Moneyball Linguistics

[This is just a fun thought experiment. Please don’t get mad.]

The other day I had an intrusive thought: the phrase moneyball linguistics. Of course, as soon as I had a moment to myself, I had to sit down and think about what this might denote. At first I imagined building out a linguistics program on a small budget, like Billy Beane and the Oakland A’s. But it seems to me that linguistics departments aren’t really much like baseball teams—they’re only vaguely competitive (occasionally for graduate students or junior faculty), there’s no imperative to balance the roster, there’s no disabled list (or is that just sabbatical?), and so on—and the metaphor sort of breaks down. But the ideas of Beane and co. do seem to have some relevance to talking about individual linguists and labs. I don’t have OBP or slugging percentage for linguists, and I wouldn’t dare to propose anything so crude, but I think we can talk about linguists and their research as a sort of “cost center” and identify two major types of “costs” for the working linguist:

  1. cash (money, dough, moolah, chedda, cheese, skrilla, C.R.E.A.M., green), and
  2. carbon (…dioxide emissions).

I think it is a perfectly fine scientific approximation (not unlike competence vs. performance) to treat the linguistic universe as having a fixed amount of cash and carbon, so that we could use this thinking to build out a department roster and come in just under the salary cap. While state research budgets do fluctuate—and while our imaginings of a better world should also include more science funding—it is hard to imagine that near-term political change in the West would substantially increase them. And similarly, while there is roughly 10¹² kg of carbon in the earth’s crust, climate scientists agree that the vast majority of it really ought to stay there. Finally, I should note that maybe we shouldn’t treat these as independent factors, given that there is a non-trivial amount of linguistics funding via petrodollars. But anyways, without further ado, let’s talk about some types of researchers and how they score on the cash-and-carbon rubric.

  • Armchair research: The armchairist is clearly both low-cash (if you don’t count the sports coats) and low-carbon (if you don’t count the pipe smoke).
  • Field work: “The field” could be anywhere, even the reasonably affordable, accessible, and often charming Queens, but the archetypal fieldworker flies in, first on a jet, then perhaps reaching their destination via helicopter or seaplane. Once you’re there, though, life in the field is often reasonably affordable, so this scores as low-cash, high-carbon.
  • Experimental psycholinguistics: Experimental psycholinguists have reasonably high capital/startup costs (in the form of eyetracking devices, for instance) and steady marginal costs for running subjects: the subjects themselves may come from the Psych 101 pool but somebody’s gotta be paid to consent them and run them through the task. We’ll call this medium-cash, low-carbon.
  • Neurolinguistics: The neurolinguistic imaging technique du jour, magnetoencephalography (or MEG), requires superconducting coils cooled to a chilly 4.2 K (roughly −452 °F); this in turn is accomplished with liquid helium. Not only is the cooling system expensive and power-hungry, but the helium is mostly wasted (i.e., vented to the atmosphere). Helium is the second-most abundant element in the universe, but we are quite literally running out of the stuff here on Earth. So MEG, at least, is high-cash, high-carbon.
  • Computational linguistics: There was a time not so long ago when I would have said that computational linguists were a bunch of hacky-sackers filling up legal pads with Greek letters (the weirder the better) and typing some kind of line noise they call “Haskell” into ten-year-old Thinkpads. But nowadays, deep learning is the order of the day, and the substantial carbon impact of these methods is well-documented, or at least well-estimated (e.g., Strubell et al. 2019). Now, it probably should be noted that a lot of the worst offenders (BigCos and the Québécois) locate their data centers near sources of plentiful hydroelectric power, but not all of us live within the efficient transmission zones for hydropower. And of course, graphics processing units are expensive too. So most computational linguistics is, increasingly, high-cash, high-carbon.

On a more serious note, just so you know, unless you run an MEG lab or are working on something called “GPT-G6”, chances are your biggest carbon contributions are the meat you eat, the cars you drive, and the short-haul jet flights you take, not other externalities of your research.

References

Strubell, E., Ganesh, A., and McCallum, A. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650.

“I understood the assignment”

We do a lot of things downstream with the machine learning tools we build, but a model cannot always reasonably be said to have “understood the assignment”, in the sense that the classifier was trained to do exactly what we are asking it to do.

Take, for example, Yuan and Liberman (2011), who study the realization of word-final ing in American English. This varies between a dorsal variant [ɪŋ] and a coronal variant [ɪn].1 They refer to this phenomenon using the layman’s term g-dropping; I will use the notation (ing) to refer to all variants. They train Gaussian mixture models on this distinction, then enrich their pronunciation dictionary so that each word can be pronounced with or without g-dropping; it is as if the two variants are homographs. Then they perform a conventional forced alignment; as a side effect, it determines which of the “homographs” was most likely used. This does seem to work, and is certainly very clever, but it strikes me as a mild abuse of the forced alignment technique, since the model was not so much trained to distinguish between the two variants as to produce a global joint model over audio and phoneme sequences.
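The dictionary-enrichment trick is easy to picture in code. The sketch below is my own reconstruction, not Yuan and Liberman’s tooling; the ARPABET-style entries and the enrich helper are invented for illustration.

```python
# A sketch (mine, not Yuan & Liberman's code) of the dictionary-
# enrichment step: every entry ending in unstressed "IH0 NG" gets a
# coronal "IH0 N" twin, so the aligner can choose between the two
# variants as if they were homographs.

def enrich(dictionary):
    """Maps word -> list of ARPABET pronunciation strings."""
    enriched = {}
    for word, prons in dictionary.items():
        variants = list(prons)
        for pron in prons:
            if pron.endswith("IH0 NG"):
                # Swap the dorsal nasal NG for coronal N.
                variants.append(pron[: -len("NG")] + "N")
        enriched[word] = variants
    return enriched

lexicon = {"walking": ["W AO1 K IH0 NG"], "sing": ["S IH1 NG"]}
print(enrich(lexicon))
# "walking" gains the variant "W AO1 K IH0 N"; "sing" is untouched,
# since its stressed "IH1 NG" is not an (ing) token.
```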

What would an approach to the g-dropping problem that better understood the assignment look like? One possibility would be to run ordinary forced alignment, with an ordinary dictionary, and then extract all instances of (ing). The alignment would, naturally, give us reasonably precise time boundaries for the relevant segments. These could then be submitted to a discriminative classifier (perhaps an LSTM) trained to distinguish the various forms of (ing). In this design, one can accurately say that the two components, aligner and classifier, understand the assignment. I expect that this would work quite a bit better than what Yuan and Liberman did, though that’s just conjecture at present.
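A rough skeleton of that two-stage design is given below. Everything in it is a stand-in: the alignment tuples, the toy (ing) lexicon, and classify_span are all hypothetical, and a real pipeline would parse the aligner’s output (e.g., TextGrid interval tiers) and replace classify_span with a trained acoustic model.

```python
# Skeleton of the align-then-classify design: run an ordinary forced
# alignment, extract the intervals for (ing) tokens, then hand those
# intervals to a dedicated classifier.

ING_WORDS = {"walking", "running", "talking"}  # toy (ing) lexicon

def extract_ing_spans(alignment):
    """From (word, start, end) tuples, pulls spans for (ing) tokens.

    A real pipeline would take just the final unstressed syllable's
    boundaries; here the whole word interval stands in for it."""
    return [(w, s, e) for (w, s, e) in alignment if w in ING_WORDS]

def classify_span(span):
    """Stand-in for the discriminative (ing) classifier."""
    return "dorsal"  # i.e., [ɪŋ]; a real model would inspect audio.

alignment = [("he", 0.0, 0.2), ("was", 0.2, 0.4), ("walking", 0.4, 0.9)]
for word, start, end in extract_ing_spans(alignment):
    print(word, start, end, classify_span((word, start, end)))
```

Here both components can accurately be said to understand the assignment: the aligner does ordinary alignment, and the classifier is trained on exactly the (ing) distinction.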

Some recent work by my student Angie Waller (published as Waller and Gorman 2020) involved an ensemble of two classifiers, one of which more clearly understood the assignment than the other. The task here was to detect reviews of professors which are objectifying, in the sense that they make off-topic, usually positive, comments about the professors’ appearance. One classifier makes document-level classifications, and cannot be said to really understand the assignment. The other classifier attempts to detect “chunks” of objectifying text; if any such chunks are found, one can label the entire document as objectifying. While neither technique is particularly accurate (at the document level), the errors they make are largely uncorrelated, so an ensemble of the two obtains reasonably high precision, allowing us to track trends in hundreds of thousands of professor reviews over the last decade.
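A toy calculation shows why intersecting two noisy detectors helps; the numbers below are invented for illustration, not taken from the paper.

```python
# Toy demonstration of why conjoining two detectors with largely
# uncorrelated errors raises precision: a document is flagged only
# when both detectors fire, so non-coinciding false alarms drop out.

def precision(preds, gold):
    """Fraction of flagged documents that are truly objectifying."""
    flagged = [g for p, g in zip(preds, gold) if p]
    return sum(flagged) / len(flagged) if flagged else 0.0

# Gold labels and each classifier's predictions over five documents.
gold  = [True, True, False, False, False]
doc   = [True, True, True,  False, False]  # document-level classifier
chunk = [True, False, False, True, False]  # chunk detector

both = [d and c for d, c in zip(doc, chunk)]  # the ensemble

print(precision(doc, gold))    # 2/3: one false alarm among three flags
print(precision(chunk, gold))  # 1/2: one false alarm among two flags
print(precision(both, gold))   # 1.0: the false alarms don't coincide
```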

Endnotes

  1. This doesn’t exhaust the logical possibilities of variation; for instance, for some speakers (including yours truly), there is a variant with a tense vowel followed by the coronal nasal.

References

Waller, A. and Gorman, K. 2020. Detecting objectifying language in online professor reviews. In Proceedings of the Sixth Workshop on Noisy User-Generated Text, pages 171-180.
Yuan, J. and Liberman, M. 2011. Automatic detection of “g-dropping” in American English using forced alignment. In IEEE Workshop on Automatic Speech Recognition & Understanding, pages 490-493.

Anatomy of an analogy

I have posted a lightly revised version of the handout of a talk I gave at Stony Brook University last November here on LingBuzz. In it, I argue that analogical leveling phenomena in Latin previously attributed to pressures against interparadigmatic analogy or towards phonological process overapplication are better understood as the result of Neogrammarian sound change, loss of productivity, and finally covert reanalysis.

Don’t take money from the John Templeton Foundation

Don’t take money from the John Templeton Foundation. They backed the murderous Chicago School economists, the genocidal architects of the war on Iraq, and are among the largest contributors to the climate change denial movement. That’s all.

Linguistics has its own Sokal affair

The Sokal affair was a minor incident in which physics professor Alan Sokal published a “hoax” (his term) paper in the cultural studies journal Social Text. Sokal’s intent was to demonstrate that reviewers and editors would approve of an article of utter nonsense so long as it obeyed certain preconceived notions, in this case that everything is a social construct. (It is, but that’s a story for another blog.)

The affair has been “read” many ways but it is generally understood to illustrate poor editorial standards at top humanities journals and/or the bankruptcy of the entire cultural studies enterprise. However, I don’t think we have any reason to suspect that either of these critiques is limited to cultural studies and adjacent fields.

I submit that the Pirahã recursion affair has many of the makings of a linguistic Sokal affair. But if anything, the outlook for linguistics is quite a bit worse than the Sokal story. By all accounts, Sokal’s hoax article was a minor scholarly event, and does not seem to have received much attention before it was revealed to be a hoax. In contrast, when Everett’s article first appeared in Current Anthropology in 2005, it received an enormous amount of attention from both scholars and the press, and ultimately led to multiple books, including a sympathetic portrait of Everett and his work by none other than the late Tom Wolfe (bang! krrp!). Finally, nearly all of what Everett has written on the subject is manifest nonsense.

I believe many scholars in linguistics and adjacent fields found Everett’s claim compelling, and while I think linguists should have seen through the logical leaps and magical thinking in the Current Anthropology piece, it wasn’t until a few years later, after the exchange with Nevins et al. in Language, that the empirical issues (to put it mildly) with Everett’s claims came to light. But the key element which gave Everett’s work such influence is that, like Sokal intended his hoax to do, it played to the biases (anti-generativist, and particularly, anti-Noam Chomsky) of a wide swath of academics (and to a lesser degree, fans of US empire, like Tom Wolfe). In that regard, it scarcely matters whether Everett himself believes or believed what he wrote: we have all been hoaxed.

What phonotactics-free phonology is not

In my previous post, I showed how many phonological arguments are implicitly phonotactic in nature, using the analysis of the Latin labiovelars as an example. If we instead adopt a restricted view of phonotactics as derived from phonological processes, as I argue for in Gorman 2013, what specific forms of argumentation must we reject? I discern two such types:

  1. Arguments from the distribution of phonemes in URs. Early generative phonologists posited sequence structure constraints, constraints on sequences found in URs (e.g., Stanley 1967 et seq.). This seems to reflect more the then-contemporary mania for information theory and lexical compression, ideas which appear to have led nowhere and which were abandoned not long after. Modern forms of this argument may use probabilistic constraints instead of categorical ones, but the same critiques remain. It has never been articulated why these constraints, whether categorical or probabilistic, are considered key acquirenda; i.e., why would speakers bother to track these constraints, given that they simply recapitulate information already present in the lexicon? Furthermore, as I noted in the previous post, it is clear that some of these generalizations are apparent even to non-speakers of the language; for example, monolingual New Zealand English speakers have a surprisingly good handle on Māori phonotactics despite knowing few if any Māori words. Finally, as discussed elsewhere (Gorman 2013: ch. 3, Gorman 2014), some statistically robust sequence structure constraints appear to have little if any effect on speakers’ judgments of nonce word well-formedness, loanword adaptation, or the direction of language change.
  2. Arguments based on the distribution of SRs not derived from neutralizing alternations. Some early generative phonologists also posited surface-based constraints (e.g., Shibatani 1973). These were posited to account for supposed knowledge of “wordlikeness” that could not be explained on the basis of constraints on URs. One example is that of German, which has across-the-board word-final devoicing of obstruents, but which clearly permits underlying root-final voiced obstruents in free stems (e.g., [gʀaːt]-[gʀaːdɘ] ‘degree(s)’ from /grad/). In such a language, Shibatani claims, a nonce word with a word-final voiced obstruent would be judged un-wordlike. Two points should be made here. First, the surface constraint in question derives directly from a neutralizing phonological process. Constraint-based theories which separate “disease” and “cure” posit a constraint against word-final voiced obstruents, but in procedural/rule-based theories there is no reason to reify this generalization, which after all is a mere recapitulation of the facts of alternation, arguably a more entrenched source of evidence for grammar construction. Secondly, Shibatani did not in fact validate his claim about German speakers’ judgments in any systematic fashion. Some recent work by Durvasula and Kahng (2019) reports that speakers do not necessarily judge a nonce word to be ill-formed just because it fails to follow certain subtle allophonic principles.

References

Durvasula, K. and Kahng, J. 2019. Phonological acceptability is not isomorphic with phonological grammaticality of stimulus. Talk presented at the Annual Meeting on Phonology.
Gorman, K. 2013. Generative phonotactics. Doctoral dissertation, University of Pennsylvania.
Gorman, K. 2014. A program for phonotactic theory. In Proceedings of the Forty-Seventh Annual Meeting of the Chicago Linguistic Society: The Main Session, pages 79-93.
Shibatani, M. 1973. The role of surface phonetic constraints in generative phonology. Language 49(1): 87-106.
Stanley, R. 1967. Redundancy rules in phonology. Language 43(2): 393-436.

Towards a phonotactics-free phonology

Early generative phonology had surprisingly little to say about the theory of phonotactics. Chomsky and Halle (1965) claim that English speakers can easily distinguish between real words like brick, well-formed or “possible” nonce words like blick, and ill-formed or “impossible” nonce words like bnick. Such knowledge must be in part language-specific, since, for instance, [bn] onsets are in some languages—Hebrew for instance—totally unobjectionable. But few attempts were made at the time to figure out how to encode this knowledge.

Chomsky and Halle, and later Stanley (1967), propose sequence structure constraints (SSCs), generalizations which encode sequential redundancies in underlying representations.1 Chomsky and Halle (p. 100) hypothesize that such generalizations might account for the ill-formedness of bnick: perhaps English consonants preceded by a word-initial obstruent must be liquids: thus blick but not bnick. Shibatani (1973) claims that not all language-specific generalizations about (im)possible words can derive from restrictions on underlying representations and must (instead or also) be expressed in terms of restrictions on surface form. For instance, in German, obstruent voicing is contrastive but neutralized word-finally; e.g., [gʀaːt]-[gʀaːtɘ] ‘ridge(s)’ vs. [gʀaːt]-[gʀaːdɘ] ‘degree(s)’. Yet Shibatani claims that German speakers judge word-final voiced obstruents, as in the hypothetical but unattested [gʀaːd], to be ill-formed. Similar claims were made by Clayton (1976). And that roughly exhausts the debate at the time. Many years later, Hale and Reiss, for instance, can deny that this kind of knowledge is part of the narrow faculty of language:

Even if we, as linguists, find some generalizations in our description of the lexicon, there is no reason to posit these generalizations as part of the speaker’s knowledge of their language, since they are computationally inert and thus irrelevant to the input-output mappings that the grammar is responsible for. (Hale and Reiss 2008:17f.)

Many years later, Charles Reiss (p.c.) proposed to me a brief thought experiment. Imagine that you were to ask a naïve non-linguist monolingual English speaker to discern whether a short snippet of spoken language was either, say, Māori or Czech. Would you not expect that such a speaker would do far better than chance, even if they themselves do not know a single word in either language? Clearly then, (at least some form of) phonotactic knowledge can be acquired extremely indirectly, effortlessly, without any substantial exposure to the language, and does not imply any deep knowledge of the grammar(s) in question.2

In a broader historical context, though, early generativists’ relative disinterest in phonotactic theory is something of a historical anomaly. Structuralist phonologists, in developing phonemicizations, were at least sometimes concerned with positing phonemes that have a restricted distribution. And for phonologists working in strains of thinking that ultimately spawned Harmonic Grammar and Optimality Theory, phonotactic generalizations are to a considerable degree what phonological grammars are made of.

A phonological theory which rejects phonotactics as part of the narrow language faculty—as do Hale and Reiss—is one which makes different predictions than theories which do include it, if only because such an assumption necessarily excludes certain sources of evidence. Such a grammar cannot make reference to generalizations about distributions of phonemes that are not tied to allophonic principles or to alternations. Nor can it make reference to the distribution of contrast except in the presence of neutralizing phonological processes.

I illustrated this point very briefly in Gorman 2014 with a famous case from Sanskrit (the so-called diaspirate roots); here I’d like to provide a more detailed example using a language I know much better, namely Latin. Anticipating the conclusions drawn below, it seems that nearly all the arguments mustered in this well-known case are phonotactic in nature and are irrelevant in a phonotactics-free theory of phonology.

In Classical Latin, the orthographic sequence qu (or more specifically <QV>) denotes the sound [kw].3 Similarly, gu is ambiguously either [gu] as in exiguus [ek.si.gu.us] ‘scanty’ or [gw] as in anguis [aŋ.gwis] ‘snake’. For whatever reason, it seems that gu was pronounced as [gw] if and only if it is preceded by an n. It is not at all clear whether this should be regarded as an orthographic generalization, a phonological principle, or a mere accident of history.

How should the labiovelars qu and (post-nasal) gu be phonologized? This topic has been the subject of much speculation. Devine and Stephens (1977) devoted half a lengthy book to the topic, for instance. More recently, Cser’s (2020: 22f.) phonology of Latin reconsiders the evidence, revising an earlier presentation (Cser 2013) of these facts. In fact three possibilities are imaginable: qu, for instance, could be unisegmental /kʷ/, bisegmental /kw/, or even /ku/ (Watbled 2005), though as Cser correctly observes, the latter does not seem to be workable. Cser reluctantly concludes that the question is not yet decidable. Let us consider this question briefly, departing from Cser’s theorizing only in the assumption of a phonotactics-free phonology.

  1. Frequency. Following Devine and Stephens, Cser notes that the lexical frequency of qu greatly exceeds that of [k] plus glide [w] (written u) in general. They take this as evidence for unisegmental /kʷ, gʷ/. However, it is not at all clear to me why this ought to matter to the child acquiring Latin. In a phonotactics-free phonology, there is simply no reason for the learner to attend to this statistical discrepancy.
  2. Phonetic issues. Cser reviews testimonia from ancient grammarians suggesting that the “[w] element in <qu> was less consonant-like than other [w]s” (p. 23). However, as he points out, this is trivially handled in the unisegmental analysis and is a trivial example of allophony in the bisegmental analysis.
  3. Geminates. Cser points out that the labiovelars, unlike all consonants but [w], fail to form intervocalic geminates. However, phonotactics-free phonology has no need to explain which underlying geminates are and are not allowed in the lexicon.
  4. Positional restrictions. Under a bisegmental interpretation, the labiovelars are “marked” in that obstruent-glide sequences are rare in Latin. On the other hand, under a unisegmental interpretation, the absence of word-final labiovelars is unexpected. However, both of these observations have no status in phonotactics-free phonology.
  5. The question of [sw]. The sequence [sw] is attested initially in a few words (e.g., suāuis ‘sweet’). Is [sw] uni- or bisegmental? Cser notes that were one to adopt a unisegmental analysis for the labiovelars qu and gu, [sw] would be the only complex onset in which [w] may occur. However, an apparently restricted distribution for [w] has no evidentiary status in phonotactics-free phonology; it can only be a historical accident encoded implicitly in the lexicon.
  6. Verb root structure. Devine and Stephens claim that verb roots ending in a three-consonant sequence are unattested except for roots ending in a sonorant-labiovelar sequence (e.g., torquere ‘to turn’, tinguere ‘to dip’). While this is unexplained under a bisegmental analysis, this is an argument based on distributional restrictions that have no status in phonotactics-free phonology. 
  7. Voicing contrast in clusters. Voicing is contrastive in Latin nasal-labiovelar clusters, thus linquam ‘I will/would leave’ (1sg. fut./subj. act.) vs. linguam ‘tongue’ (acc.sg.). According to Cser, under the biphonemic analysis this would be the only context in which a CCC cluster has contrastive voicing, and “[t]his is certainly a fact that points towards the greater plausibility of the unisegmental interpretation of labiovelars” (p. 27). It is not clear that the distribution of voicing contrasts ought to be taken into account in a phonotactics-free theory, since there is no evidence for a process neutralizing voicing contrasts in word-internal trisegmental clusters.
  8. Alternations. In two verbs, qu alternates with cū [kuː] in the perfect participle (ppl.): loquī ‘to speak’ vs. its ppl. locūtus and sequī ‘to follow’ vs. its ppl. secūtus. Superficially this resembles alternations in which [lv, bv, gv] alternate with [luː, buː, guː] in the perfect participle. This suggests a bisegmental analysis, and since this is based on patterns of alternation, is consistent with a phonotactics-free theory. On the other hand, qu also alternates with plain c [k]. For example, consider the verb coquere ‘to cook’, which has a past participle coctus. Similarly, the verb relinquere ‘to leave’ has a perfect participle relictus, but the loss of the Indo-European “nasal insert” (as it is known) found in the infinitive may suggest an alternative—possibly suppletive—analysis. Cser concludes, and I agree, that this evidence is ambiguous.
  9. ad-assimilation. The prefix ad- variably assimilates in place and manner to the following stem-initial consonant. Cser claims that this is rare with qu-initial stems (e.g., unassimilated adquirere ‘to acquire’ is far more frequent than assimilated acquirere in the corpus). Since ad-assimilation is extremely common with [k]-initial stems, this weakly supports the bisegmental analysis.5
  10. Diachronic considerations. Latin qu is a descendent of the Indo-European *kʷ, one member of a larger labiovelar series. All members of this series appear to be unisegmental in the proto-language. However, as Cser notes, this is simply not relevant for the synchronic status of qu and gu.
  11. Poetic licence. Rarely, the poets used a device known as diaeresis, the reading of [w] as [u], to make the meter work. Cser claims this does not obtain for qu. This is weak evidence for the unisegmental analysis, because the labial-glide portion of /kʷ/ would not obviously be in the scope of diaeresis.
  12. The distribution of gu. As noted above the voiced labiovelar gu is lexically quite rare, and always preceded by n. In a phonological theory which attends to phonotactic constraints, this is an explanandum crying out for an explanans. Cser argues that it is particularly odd under the unisegmental analysis because there is no other segment so restricted. But in phonotactics-free phonology, there is no need to explain this accident of history.

Cser concludes that this series of arguments is largely inconclusive. He takes (7, 11) to be evidence for the unisegmental analysis, (3, 5, 8, 9) to be evidence for the bisegmental analysis, and all other points to be largely inconclusive. Reassessing the evidence in a phonotactics-free theory, only (9) and (11), both based on rather rare evidence, remain as possible arguments for the status of the labiovelars. I too have to regard the evidence as inconclusive, though I am now on the lookout for diaeresis of qu and gu, and hope to obtain a better understanding of prefix-final consonant assimilation.

Clearly, working phonologists are heavily dependent on phonotactic arguments, and rejecting them as explanations would substantially limit the evidence base used in phonological inquiry.

Endnotes

  1. In part this must reflect the obsession with information theory in linguistics at the time. Of this obsession Halle (1975) would later write that this general approach was “of absolutely no use to anyone working on problems in linguistics” (532).
  2. As it happens, monolingual English-speaking New Zealanders are roughly as good at discriminating between “possible” and “impossible” Māori nonce words as are Māori speakers (Oh et al. 2020).
  3. I write this phonetically as [kw] rather than [kʷ] because it is unclear to me how the latter might differ phonetically from the former. These objections do not apply to the phonological transcription /kʷ/, however.
  4. Recently Gouskova and Stanton (2021) have revived this theory and applied it to a number of case studies in other languages. 
  5. It is at least possible that unassimilated spellings are “conservative” spelling conventions and do not reflect speech. If so, one may still wish to explain the substantial discrepancy in rates of (orthographic) assimilation to different stem-initial consonants and consonant clusters.

References

Chomsky, N. and Halle, M. 1965. Some controversial questions in phonological theory. Journal of Linguistics 1(2): 97-138.
Clayton, M. L. 1976. The redundance of underlying morpheme-structure conditions. Language 52(2): 295-313.
Cser, A. 2013. Segmental identity and the issue of complex segments. Acta Linguistica Hungarica 60(3): 247-264.
Cser, A. 2020. The Phonology of Classical Latin. John Wiley & Sons.
Devine, A. M. and Stephens, L. D. 1977. Two Studies in Latin Phonology. Anma Libri.
Gorman, K. 2013. Generative phonotactics. Doctoral dissertation, University of Pennsylvania.
Gorman, K. 2014. A program for phonotactic theory. In Proceedings of the Forty-Seventh Annual Meeting of the Chicago Linguistic Society: The Main Session, pages 79-93.
Gouskova, M. and Stanton, J. 2021. Learning complex segments. Language 97(1): 151-193.
Hale, M. and Reiss, C. 2008. The Phonological Enterprise. Oxford University Press.
Halle, M. 1975. Confessio grammatici. Language 51(3): 525-535.
Oh, Y., Simon, T., Beckner, C., Hay, J., King, J., and Needle, J. 2020. Non-Māori-speaking New Zealanders have a Māori proto-lexicon. Scientific Reports 10: 22318.
Shibatani, M. 1973. The role of surface phonetic constraints in generative phonology. Language 49(1): 87-106.
Stanley, R. 1967. Redundancy rules in phonology. Language 43(2): 393-436.
Watbled, J.-P. 2005. Théories phonologiques et questions de phonologie latine. In C. Touratier (ed.), Essais de phonologie latine, pages 25-57. Publications de l’Université de Provence.

Surprises for the new NLP developer

There are a couple things that surprise students when they first begin to develop natural language processing applications.

  • Some things just take a while. A script that, say, preprocesses millions of sentences isn’t necessarily wrong because it takes a half hour.
  • You really do have to avoid wasting memory. If you’re processing a big file line-by-line,
    • you really can’t afford to read it all in at once, and
    • you should write out data as soon as you can.
  • The OS and program already know how to buffer IO; don’t fight it.
  • So much software works with data in non-human-readable formats (e.g., wire formats, binary data) or human-hostile ones (XML); but if you’re processing text files, you can just open them up and read them to see whether they’re roughly what you expected.
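The memory points above boil down to a few lines of discipline: read one line at a time and write each result as soon as it is computed. Here is a minimal sketch, with io.StringIO standing in for real files so the example is self-contained; real code would pass file objects opened with open.

```python
# Streaming line-by-line: one line in memory at a time, results
# written out immediately, buffering left to the io layer.
import io

def preprocess(line: str) -> str:
    """Stand-in for real per-sentence preprocessing."""
    return line.strip().lower()

def stream(source, sink):
    for line in source:  # iterating a file yields one line at a time
        print(preprocess(line), file=sink)

# In-memory buffers share the file interface, so they stand in for
# open("big.txt") and open("out.txt", "w") here.
src = io.StringIO("One SENTENCE\nAnother Sentence\n")
dst = io.StringIO()
stream(src, dst)
print(dst.getvalue(), end="")  # the two lines, lowercased
```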