How does linguistics contribute to speech & language processing? While there exist some “linguist eliminationists”, who wish to process speech audio or text “from scratch” without intermediate linguistic representations, it is generally recognized that linguistic representations are the end goal of many processing “tasks”. Of course some tasks involve poorly defined, or ill-posed, end-state representations—the detection of hate speech and of named entities, neither of which is particularly well defined, linguistically or otherwise, comes to mind—but such tasks are driven by the apparent business value to be extracted rather than by any serious goal of understanding speech or text.
The standard example for this kind of argument is syntax. It may be that syntactic representations are not as useful for textual understanding as was once anticipated, and that useful features for downstream machine learning can apparently be induced using far simpler approaches, like the masked language modeling task used for pre-training in many neural models. But it’s not as if a terrorist cell of rogue linguists locked NLP researchers in their offices until they developed the field of natural language parsing. NLP researchers decided, of their own volition, to spend the last thirty years building models which could recover natural language syntax, and ultimately got pretty good at it, probably good enough, I suspect, that the remaining unresolved ambiguities mostly hinge on world knowledge that is rarely if ever made explicit.
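For readers who haven’t encountered the objective, here is a minimal sketch of the masking step only; the mask symbol, the masking probability, and the toy sentence are illustrative choices for this post, not taken from any particular model. The pre-training task is then to predict the original tokens at the masked positions from the unmasked context.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Randomly replace a fraction of tokens with a mask symbol.

    The pre-training objective is to predict the original tokens
    at the masked positions from the surrounding (unmasked) context.
    """
    rng = random.Random(seed)
    masked, targets = [], []
    for token in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK)
            targets.append(token)  # the model must recover this token
        else:
            masked.append(token)
            targets.append(None)   # this position is not scored
    return masked, targets

print(mask_tokens("the cat sat on the mat".split()))
```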
Let us consider another example, less widely discussed: the phoneme. The phoneme was discovered in the late 19th century by Baudouin de Courtenay and Kruszewski; it has been around a very long time. In the century and a half since it emerged from the Polish academy, Poland itself has been a congress, a kingdom, a military dictatorship, and a republic (three times), and has been annexed by the Russian empire, the German Reich, and the Soviet Union. The phoneme is probably here to stay. It is, by any reasonable account, one of the most successful abstractions in the history of science.
It is no surprise, then, that the phoneme plays a major role in speech technologies. Not only did the first speech recognizers and synthesizers make explicit use of phonemic representations (as well as of notions like allophones), but so did the next five decades’ worth of recognizers and synthesizers. Conventional recognizers and synthesizers require large pronunciation lexicons mapping between orthographic and phonemic form, and, as they get closer to the speech signal, they convert these “context-independent” phonemic sequences into “context-dependent” representations which can account for allophony and local coarticulation, exactly as any linguist would expect. It is only in the last few years that it has even become possible to build a reasonably effective recognizer or synthesizer which lacks an explicit phonemic level of representation. Such models instead use clever tricks and enormous amounts of data to induce implicit phonemic representations. We have every reason to suspect these implicit representations are quite similar to the explicit ones linguists would posit. For one, these implicit representations are keyed to orthographic characters, and as I wrote a month ago, “the linguistic analysis underlying a writing system may be quite naïve but may also encode sophisticated phonemic and/or morphemic insights.” If anything, that’s too weak: in most writing systems I’m aware of, the orthography is either a precise phonemic analysis (possibly omitting a few details of low functional load, or using digraphs to get around limitations of the alphabet of choice) or a precise morphophonemic analysis (ditto). For Sapir (1925 et seq.), this was key evidence for the existence of phonemes! So whether or not implicit “phonemes” are better than explicit ones, speech technologists have converged on the same rational, mentalistic notions discovered by Polish linguists a century and a half ago.
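To make that pipeline concrete, here is a minimal sketch of the two steps just described: lexicon lookup, and expansion of a context-independent phoneme string into context-dependent triphone labels. The two-word lexicon, the ARPABET-style symbols, and the label format are illustrative assumptions for this post; real systems also handle stress, word boundaries, out-of-vocabulary words, and pronunciation variants.

```python
# Toy pronunciation lexicon: orthographic word -> phoneme sequence.
# Entries and symbols are illustrative, ARPABET-style.
LEXICON = {
    "speech": ["S", "P", "IY", "CH"],
    "tech":   ["T", "EH", "K"],
}

def to_phonemes(words):
    """Look up each word and concatenate its context-independent phonemes."""
    phonemes = []
    for word in words:
        phonemes.extend(LEXICON[word])
    return phonemes

def to_triphones(phonemes, sil="SIL"):
    """Rewrite each phoneme as a context-dependent triphone label
    of the form left-center+right, padding the edges with silence."""
    padded = [sil] + phonemes + [sil]
    return [
        f"{padded[i - 1]}-{padded[i]}+{padded[i + 1]}"
        for i in range(1, len(padded) - 1)
    ]

phones = to_phonemes(["speech", "tech"])
print(phones)                # ['S', 'P', 'IY', 'CH', 'T', 'EH', 'K']
print(to_triphones(phones))  # ['SIL-S+P', 'S-P+IY', 'P-IY+CH', 'IY-CH+T', ...]
```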
So it is surprising to me that even those schooled in the art of speech processing view the contribution of linguistics to the field in a somewhat negative light. For instance, Paul Taylor, the founder of the TTS firm Phonetic Arts, published a Cambridge University Press textbook on TTS methods in 2009, and while it’s by now quite out of date, there’s no more-recent work of comparable breadth. Taylor spends the first five hundred (!) pages or so talking about linguistic phenomena like phonemes, allophones, prosodic phrases, and pitch accents—at the time, the state of the art in synthesis made use of explicit phonological representations—so it is genuinely a shock to me that Taylor chose to close the book with a chapter (Taylor 2009: ch. 18) about the irrelevance of linguistics. Here are a few choice quotes, with my commentary.
It is widely acknowledged that researchers in the field of speech technology and linguistics do not in general work together. (p. 533)
It may be “acknowledged”, but I don’t think it has ever been true. The number of linguists and linguistically-trained engineers working on FAANG speech products every day is huge. (Modern corporate “AI” is to a great degree just other people, mostly contractors in the Global South.) Taylor continues:
The first stated reason for this gap is the “aeroplanes don’t flap their wings” argument. The implication of this statement is that, even if we had a complete knowledge of how human language worked, it would not help us greatly because we are trying to develop these processes in machines, which have a fundamentally different architecture. (p. 533)
I do not expect that linguistics will provide deep insights about how to build TTS systems, but it clearly identified the relevant representational units for building such systems many decades ahead of time, just as mechanics provided the basis for mechanical engineering. This was true of Kempelen’s speaking machine (which predates phonemic theory, and so had to discover something like it) and Dudley’s Voder, as well as of speech synthesizers in the digital age. So I guess I kind of think that speech synthesizers do flap their wings: parametric, unit selection, hybrid, and neural synthesizers are all big fat phoneme-realization machines. As is standard practice in the physical sciences, the simple elementary particles of phonological theory—phonemes, and perhaps features—were discovered quite early on, but the study of their ontology has taken up the intervening decades. And unlike the physical sciences, we cognitive scientists must some day also understand their epistemology (what Chomsky calls “Plato’s problem”) and, ultimately, their evolutionary history (“Darwin’s problem”) too. Taylor, as an engineer, need not worry himself about these further studies, but I think he is being wildly uncharitable about the nature of what he’s studying, and about the business value of having a well-defined hypothesis space of representations for his team to engineer within.
Taylor’s argument wouldn’t be complete without a caricature of the generative enterprise:
The most-famous camp of all is the Chomskian [sic] camp, started of course by Noam Chomsky, which advocates a very particular approach. Here data are not used in any explicit sense, quantitative experiments are not performed and little stress is put on explicit description of the theories advocated. (p. 534)
This is nonsense. Linguistic examples are data, in some cases better data than results from corpora or behavioral studies, as the work of Sprouse and colleagues has shown. No era of generativism was actively hostile to behavioral results; as early as the ’60s, generativist-aligned psycholinguists were experimentally testing the derivational theory of complexity and studying morphological decomposition in the lab. And I have simply never found that generativist theorizing lacks for formal explicitness; in phonology, for instance, the major alternatives to generativist thinking are exemplar theory—which isn’t even explicit enough to be wrong—and a sort of neo-connectionism—which ought not to work at all, given extensive proof-theoretic studies of formal learnability and of the formal properties of stochastic gradient descent and backpropagation. Taylor goes on to suggest that the “curse of dimensionality” and issues of generalizability prevent the application of linguistic theory. Once again, though, the things we’re trying to represent are linguistic notions: machine learning using “features” or “phonemes”, explicit or implicit, is still linguistics.
Taylor concludes with some predictions about how he hopes TTS research will evolve. His first prediction is that textual analysis techniques from NLP will become increasingly important. Here the future has been kind to him: they have, but as the work of Sproat and colleagues has shown, we remain quite dependent on linguistic expertise—of a rather different and less abstract sort than the notion of the phoneme—to develop these systems.
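To give a flavor of the sort of textual analysis involved, here is a hedged sketch of one tiny corner of it, cardinal-number expansion, written for this post rather than taken from any production system. Note that even this presupposes a linguistic decision made upstream: whether a digit string is a cardinal, a year, or part of a phone number.

```python
# A toy English cardinal-number expander, the kind of rule every
# TTS front end needs before a digit string can be pronounced.
ONES = ["zero", "one", "two", "three", "four", "five", "six",
        "seven", "eight", "nine", "ten", "eleven", "twelve",
        "thirteen", "fourteen", "fifteen", "sixteen", "seventeen",
        "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def expand_cardinal(n):
    """Spell out 0-999 in English words (illustrative only)."""
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, ones = divmod(n, 10)
        return TENS[tens] + ("-" + ONES[ones] if ones else "")
    hundreds, rest = divmod(n, 100)
    spoken = ONES[hundreds] + " hundred"
    return spoken + (" " + expand_cardinal(rest) if rest else "")

print(expand_cardinal(342))  # "three hundred forty-two"
```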
References
Sapir, E. 1925. Sound patterns in language. Language 1:37-51.
Taylor, P. 2009. Text-to-Speech Synthesis. Cambridge University Press.