Another pseudo-decipherment of the Voynich manuscript

The Voynich manuscript consists of 240 pages of text and fanciful illustrations written in an unknown script. It is first mentioned in the 16th century, then largely disappears from the record for several centuries, only to resurface for sale in 1903. Independent carbon dating assigns an early 15th-century date to the vellum, but some scholars speculate that it may have been inked or re-inked at a later date. Other scholars believe it to be an elaborate hoax or forgery. A recent paper in Transactions of the Association for Computational Linguistics (TACL) by Bradley Hauer & Grzegorz Kondrak (henceforth H&K), entitled Decoding Anagrammed Texts Written in an Unknown Language, was touted to have enabled a decipherment of the Voynich. Have H&K succeeded where others have failed? Unfortunately, having reviewed the paper carefully, I can say with some certainty that they have not.

H&K propose two techniques towards decipherment. First, they describe methods to determine the underlying language of a plaintext using only the ciphertext, assuming a simple bijective substitution cipher. Their preferred method does not depend on the linear order of strings within the ciphertext, and thus works equally well when the ciphertext characters have been permuted within words (assuming that word boundaries are somehow clearly delimited in the ciphertext), a point which will become important shortly. Then, they describe methods for cryptanalysis when the encipherment consists of a bijective substitution cipher under certain degenerate conditions, such as where the ciphertext lacks vowels, or where the ciphertext characters have been randomly permuted within words.

That much is fine (though I have some quibbles with the details, as you’ll see). My major issue with H&K is that they don’t provide any evidence that the Voynich is so encoded; they simply assume it. And, despite the press hype, their preferred method fails to produce anything remotely readable.

I don’t have much to say about their method for identifying source language; it is a relatively novel task—they only cite one prior work—and their method and evaluation both appear to be sound. I appreciate that their evaluation includes a brute force-like method of simply attempting to decipher the text as a given language, and as a topline, an “oracle” scenario in which the decipherment is known and the problem reduces to standard language ID. But I was struck by the following claim about their decipherment method (p. 79):

“We conclude that our greedy-swap algorithm strikes the right balance between accuracy and speed required for the task of cipher language identification.”

It’s hard for me to imagine in what sense “cipher language identification” might be considered something which needs to be fast (rather than merely feasible). I think, in contrast, we would be just fine with using supercomputers for this task if it worked.[1]

So what does their preferred method say about the plaintext language of the Voynich? It assigns, by far, the highest probability to Hebrew.[2,3] Naturally, the oracle scenario is inapplicable here; whereas most archaeological decipherments have worked from a small set of candidate languages for the plaintext, there is nothing like a consensus regarding the language of the Voynich.

H&K then consider methods for decipherment itself. This problem is essentially a type of unsupervised machine learning in which the objective is to identify a mapping from ciphertext to plaintext (a key) which maximizes the probability of the resulting plaintext with respect to some language model. Kevin Knight and colleagues have, in the last two decades, proposed three distinct applications for this scenario:

  • Unsupervised translation: Knight & Graehl (1998) use this scenario to learn low-resource transliteration models, and some subsequent work has applied this to other low-resource, small-vocabulary tasks, but as of yet such methods don’t scale well to machine translation in general.
  • Steampunk cryptanalysis: Knight et al. (2006) use this scenario for unknown-plaintext cryptanalysis of bijective substitution ciphers, and subsequent work has also applied this to homophonic and running key ciphers. But the aforementioned ciphers have been known to be vulnerable to pencil-and-paper attacks for a century or more, and it’s not clear that these methods are effective attacks against any cryptosystem in widespread use today.
  • Archaeological decipherment: Snyder et al. (2010) attempt to simulate the automatic decipherment of Ugaritic, a Semitic language written in a cuneiform script in the 14th through the 12th century BCE; the texts were manually deciphered in 1929-1931 by exploiting the language’s strong similarity to biblical Hebrew. Knight et al. (2012) show that an undeciphered 18th century manuscript is in fact a description of a Masonic ritual written in German and encoded using a homophonic cipher. However, others have argued that computational methods for archaeological decipherment are still quite limited. For instance, Sproat (2010a,b, 2014) draws attention to the unsolved problem of determining whether a symbol system represents language in the first place, and to the long history of pseudo-decipherment.

Regardless of the application, it should be obvious that decipherment is a computationally challenging problem. Formally, given a bijective cipher over an alphabet K, the keyspace has size |K|!, since each candidate key is a permutation of K; for a 26-letter alphabet, that is roughly 4 × 10^26 candidate keys. Three classes of methods are found in the literature:

  • Integer linear programming (ILP; e.g., Ravi & Knight 2008)
  • A linear relaxation of the ILP to expectation maximization or related methods (e.g., Knight et al. 2006)
  • Search-based techniques using a beam or tree (e.g., Hauer et al. 2014)

H&K’s preferred method is a case of the last of these; they refer to this prior work as “state-of-the-art”, but skimming Hauer et al. suggests they are state-of-the-art on decrypting snippets of text randomly sampled from the English Wikipedia article “History”. This doesn’t strike me as an acceptable benchmark, even if there’s some precedent in the literature, and what’s worse is that they use snippets as short as two characters, which are well below the well-known theoretical bound (the unicity distance).
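To make the flavor of such search concrete, here is a minimal sketch, in Python, of greedy-swap hill climbing over a bijective substitution cipher, scored with an add-one-smoothed character bigram language model. This is my own toy reconstruction of the general idea, not H&K’s or Hauer et al.’s implementation; the function names, the smoothing, and the assumption that both training text and ciphertext consist only of lowercase letters and spaces are all mine.

import math
from collections import Counter
from itertools import combinations
from string import ascii_lowercase as LETTERS

ALPHABET = LETTERS + " "  # letters are enciphered; spaces are passed through

def bigram_lm(text):
    # Add-one-smoothed character bigram log-probabilities estimated from training text.
    bigrams, unigrams = Counter(zip(text, text[1:])), Counter(text)
    V = len(ALPHABET)
    return {(a, b): math.log((bigrams[a, b] + 1) / (unigrams[a] + V))
            for a in ALPHABET for b in ALPHABET}

def lm_score(text, lm):
    return sum(lm[a, b] for a, b in zip(text, text[1:]))

def decode(ciphertext, key):
    return "".join(key.get(c, c) for c in ciphertext)

def greedy_swap(ciphertext, lm, max_rounds=50):
    # Hill climbing: starting from the identity key, repeatedly apply any
    # letter-pair swap that improves the language-model score, until none does.
    key = {c: c for c in LETTERS}
    best = lm_score(decode(ciphertext, key), lm)
    for _ in range(max_rounds):
        improved = False
        for a, b in combinations(LETTERS, 2):
            key[a], key[b] = key[b], key[a]
            trial = lm_score(decode(ciphertext, key), lm)
            if trial > best:
                best, improved = trial, True
            else:
                key[a], key[b] = key[b], key[a]  # undo the swap
        if not improved:
            break
    return key, best

Given a long ciphertext and a reasonable amount of training text, hill climbing of this sort tends to recover much of the key; on two-character snippets it obviously cannot.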

H&K propose two adaptations of the Hauer et al. model. First, they consider a variant which can handle ciphertext in which characters have been permuted (“anagrammed”) within words (assuming that word boundaries are clearly delimited in the ciphertext and are the same as word boundaries in the plaintext). H&K mention that this has been suggested in prior Voynichology—though this might well be pure speculation, since we can’t read the manuscript—but do not themselves argue that the Voynich is anagrammed. Random permutation of letters within words strikes me as a poor cryptographic strategy due to the non-determinism it introduces. Rof nnastcie anc uyo adre shti eetnencs? That’s hard to read, in my opinion, though not completely impossible with enough context.[4] While I can’t really put myself into the mind of the creators of the Voynich manuscript, it seems that a wide degree of hermeneutic freedom is undesirable in most written genres, even texts of, say, an occult nature: you don’t want to accidentally turn yourself into a newt! Secondly, H&K adapt their model so that it can restore vowels omitted in the plaintext.[5] They refer to the resulting ciphertext with vowels omitted as an “abjad”, using a rare term of art for consonantal writing systems, i.e., those in which vowels are omitted. Phoenician, the ancestor of the Greek & Latin alphabets, did not originally write vowels at all, but they are inconsistently present in later texts, and both Hebrew & Arabic write certain vowels. In Standard Arabic, for example, all long vowels are written explicitly, and Hebrew during the Renaissance era was normally written with the Tiberian diacriticization (or niqqud) developed several centuries earlier. H&K seem to be assuming a total omission of vowels, which would be both anachronistic and typologically rare, and had they mentioned either of those facts in a brief disclaimer admitting to their slight abuse of terminology, I wouldn’t think they were misled, or misleading the reader, about what an abjad (normally) is.

It seems to me that H&K have, at this point, taken a method-free leap of faith towards the hypothesis that the Voynich is vowel-less Hebrew, anagrammed and encoded with a bijective substitution cipher. Perhaps I’d be willing to forgive it if these assumptions allowed them to produce some readable plaintext. Here’s what they have to say about that (p. 84):

“None of the decipherments appear to be syntactically correct or semantically consistent. […] The first line of the VMS [Voynich manuscript]…is deciphered into Hebrew as ועשה לה הכה איש אליו לביחו ו עלי אנשי המצות. According to a native speaker of the language, this is not quite a coherent sentence. However, after making a couple of spelling corrections, Google Translate is able to convert it into passable English: ‘She made recommendations to the priest, man of the house and me and people.'”

So the authors, neither of whom, apparently, is a native speaker of Hebrew, post-edited the output of their system until the MT decoder produced this sentence. As others have noted, this is not an acceptable method—modern MT systems are extremely good at producing locally coherent text from degenerate input.

H&K suggest two possible interpretations of their results: “the results presented in this section could be interpreted either as tantalizing clues for Hebrew as the source language of the VMS, or simply as artifacts of the combinatoric power of anagramming and language models.” (p. 84f.) So they are not really claiming, at least in this article, a decipherment—that’s an addition of the subsequent, irresponsible press coverage, for which I can’t really blame H&K—but I can’t imagine calling this “tantalizing”. I don’t see any reason to think H&K have any confidence in their decipherment, either: they don’t provide more than a single plaintext sentence, and don’t provide a key. Had I been asked to review this paper, I would have requested that the portion of the paper dealing with language identification employ corpora of non-linguistic symbol systems (such as those in Sproat 2014), and I would have insisted that the portion of the paper dealing with the decipherment of the Voynich be essentially scrapped. The Voynich angle is a red herring: there is nothing here. Had they just removed it, this would have been a perfectly good TACL paper!

In 2010, my colleague Richard Sproat wrote a brief article for the journal Computational Linguistics (Sproat 2010b) which reviewed a recent paper by Rao et al. (2009), published in the journal Science. Rao et al. claim to provide statistical evidence that the Indus Valley seals are a writing system. Now there are quite a few reasons to suspect the seals are not writing under any common-sense definition thereof. More importantly, though, Rao et al.’s method fails to discriminate between linguistic and non-linguistic symbol systems (see, e.g., Sproat 2014). Sproat implies that had the Science editors simply retained computational linguists as referees, they would have been made aware of the manifest flaws of Rao et al.’s paper and would thus have rejected it. With respect to my colleague, he has been shown to be wrong on both counts. First, when these journals retain computational linguist referees, they simply ignore negative reviews of technically flawed, linguistically oriented work when it has sufficient “woo factor”. Secondly, woo factor trumps lack of method even in one of the top journals for computational linguistics and natural language processing, one which I review for and publish in. Some recent research suggests that fanciful university press releases are a key contributor to scientific hype. As far as I can tell, that is what happened here: the “tantalizing clues” in a flawed journal article were wildly exaggerated by the University of Alberta press office, and major publications took the press release at its hyperbolic word.

PS: If you’re interested in more wild speculation about the Voynich manuscript, may I suggest you check out @voynich_bot on Twitter?

Acknowledgements

Thanks to Brian Roark & Richard Sproat for feedback on this.

Endnotes

[1] The hacks at the Daily Mail are rather confused here; Carmel isn’t a supercomputer—it’s a free software package for doing expectation maximization over finite-state transducers—and at worst you might want to run these kinds of experiments using a top-of-the-line microcomputer, possibly with a powerful graphics card (e.g., Berg-Kirkpatrick & Klein 2013).

[2] An alternative method prefers Mazatec, which H&K correctly reject as chronologically implausible; a couple other top possibilities are Mozarabic, Italian, and Ladino, which H&K consider “plausible”. Mozarabic is an extinct Romance language that was spoken (but only rarely written) by Christians living in Moorish Spain; it is unclear whether H&K are using the Arabic or the Roman orthography (neither was really standard). Ladino was spoken in the same region and time period but by the Sephardic population; it was written using Hebrew characters. As far as I know, both languages would have declined rapidly after the conclusion of the Reconquista, which imposes a terminus ante quem of roughly 1492, if either is the plaintext language of the Voynich.

[3] For reasons unclear to me they only use 43 pages of the manuscript in their Voynich experiments. This seems like a major flaw to me. Had I been asked to review this paper, I would have requested a justification.

[4] To wit, in the CMU dictionary, 17% of six-character words are an anagram of at least one other word, and there are no less than fifteen anagrams of the sequence AEIMNR.
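For the curious, here is a quick sketch of how such counts can be computed; the word list path below is a placeholder, and the exact figures will of course differ from the CMU dictionary numbers above.

from collections import defaultdict

def anagram_classes(words):
    # Group words by their sorted letters; words sharing a key are mutual anagrams.
    classes = defaultdict(set)
    for word in words:
        classes["".join(sorted(word))].add(word)
    return classes

words = {w.strip().lower() for w in open("/usr/share/dict/words") if w.strip().isalpha()}
six = [w for w in words if len(w) == 6]
classes = anagram_classes(six)
ambiguous = sum(len(c) for c in classes.values() if len(c) > 1)
print(ambiguous / len(six))        # share of six-letter words with at least one anagram
print(sorted(classes["aeimnr"]))   # e.g., ['airmen', 'marine', 'remain', ...]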

[5] H&K claim that one can’t use the linear relaxation method to restore vowels. I don’t see why, though. If the hypothesis space is expressed as a single-state weighted finite-state transducer, and the plaintext vowels are simply mapped to epsilon, then everything proceeds as normal. In fact I am running such an experiment with a ciphertext consisting of an “abjad” (no-vowel) rendering of the Gettysburg Address. I use a variant of the Knight et al. (2006) approach with Baum-Welch training and forward-backward decoding rather than their Viterbi approximations (software here). Because the resulting lattice is cyclic, the shortest-distance computation during the E-step is more complex than normal, but it does basically work. This is to be expected: you prbbly hv lttl trbl rdng txt tht lks lk ths. Experimental results forthcoming.
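For concreteness, here is the sort of preprocessing I have in mind for the “abjad” (no-vowel) ciphertext; this is only the ciphertext construction, not the transducer or the training itself, and the function name is my own.

import re

def abjadize(text):
    # Crudely simulate a vowel-less ("abjad") rendering by deleting vowel letters.
    return re.sub(r"[AEIOUaeiou]", "", text)

print(abjadize("Four score and seven years ago"))  # Fr scr nd svn yrs g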

References

Berg-Kirkpatrick, Taylor; Klein, Dan. 2013. Decipherment with a million random restarts. In EMNLP, pages 874-878.
Hauer, Bradley; Hayward, Ryan; Kondrak, Grzegorz. 2014. Solving substitution ciphers with combined language models. In COLING, pages 2314-2325.
Hauer, Bradley; Kondrak, Grzegorz. 2016. Decoding anagrammed texts written in an unknown language. Transactions of the Association For Computational Linguistics 4: 75-86.
Knight, Kevin; Graehl, Jonathan. 1998. Machine transliteration. Computational Linguistics 24(4): 599-612.
Knight, Kevin; Nair, Anish; Rathod, Nishi; Yamada, Kenji. 2006. Unsupervised analysis for decipherment problems. In COLING, pages 499-506.
Knight, Kevin; Megyesi, Beáta; Schaefer, Christiane. 2012. The secrets of the Copiale cipher. Journal for Research into Freemasonry 2(2): 314-324.
Ravi, Sujith; Knight, Kevin. 2008. Attacking decipherment problems optimally with low-order n-gram models. In EMNLP, pages 812-819.
Rao, Rajesh; Yadav, Nisha; Vahia, Mayank; Joglekar, Hrishikesh; Adhikari, R.; Mahadevan, Iravatham. 2009. Entropic evidence for linguistic structure in the Indus script. Science 324(5931): 1165.
Snyder, Ben; Barzilay, Regina; Knight, Kevin. 2010. A statistical model for lost language decipherment. In ACL, pages 1048-1057.
Sproat, Richard. 2010a. Language, Technology, and Society. Oxford: Oxford University Press.
Sproat, Richard. 2010b. Ancient symbols, computational linguistics, and the reviewing practices of the general science journals. Computational Linguistics 36(3): 585-594.
Sproat, Richard. 2014. A statistical comparison of written language and nonlinguistic symbol systems. Language 90(2): 457-481.

What to do about the academic brain drain

The academy-to-industry brain drain is very real. What can we do about it?

Before I begin, let me confess my biases. I work in the research division of a large tech company (and I do not represent their views). Before that, I worked on grant-funded research in the academy. I work on speech and language technologies, and I’ll largely confine my comments to that area.

[Content warnings: organized labor, name-calling.]

Salary

Fact of the matter is, industry salaries are determined by a relatively efficient labor market. Academy salaries are compressed, with a relatively firm ceiling for all but a handful of “rock star” faculty. The vast majority of technical faculty are paid substantially less than they’d make if they just took the very next industry offer that came around. It’s even worse for research professors who depend on grant-based “salary support” in a time of unprecedented “austerity”—they can find themselves functionally unemployed any time a pack of incurious morons seem to end up in the White House (as seems to happen every eight years or so).

The solution here is political. Fund the damn NIH and NSF. Double—no, triple—their funding. Pay for it by taxing corporations and the rich, or, better yet, divert some money from the Giant Death Machines fund. Make grant support contractual, so PIs with a five-year grant are guaranteed five years of salary support and a chance to realize their vision. Insist on transparency and consistency in “indirect costs” (i.e., overhead) for grants to drain the bureaucratic swamp (more on that below). Resist the casualization of labor at universities, and do so at every level. Unionize every employee at every American university. Aggressively lobby Democratic presidential candidates to agree to appoint a National Labor Relations Board that will continue to recognize graduate students’ right to unionize.

Administration & bureaucracy

Industry has bureaucratic hurdles, of course, but they’re in no way comparable to the profound dysfunction taken for granted in the academic bureaucracy. If you or anyone you love has ever written a scientific grant, you know what I mean; if not, find a colleague who has and politely ask them to tell you their story. At the same time American universities are cutting their labor costs through casualization, they are massively increasing their administrative costs. You will not be surprised to find that this does not produce better scientific outcomes, or make it easier to submit a grant. This is a case of what Noam Chomsky has described as the “neoliberal confidence trick”. It goes a little something like this:

  1. Appoint/anoint all-powerful administrators/bureaucrats, selecting for maximal incompetence.
  2. Permit them to fail.
  3. Either GOTO #1, or use this to justify cutting investment in whatever was being administered in the first place.

I do not see any way out of this situation except class consciousness and labor organizing. Academic researchers must start seeing the administration as potentially hostile to their interests, and refuse to identify with (or, quelle horreur, to join) the managerial classes.

Computing power & data

The big companies have more computers than universities. But in my area, speech and language technology, nearly everything worth doing can still be done with a commodity cluster (like you’d find in the average American CS department) or a powerful desktop with a big GPU. And of those, the majority can still be done on a cheap laptop. (Unless, of course, you’re one of those deep learning eliminationist true believers, in which case, reconsider.) Quite a bit of great speech & language research—in particular, work on machine translation—has come from collaborations between the Giant Death Machines funding agencies (like DARPA) and academics, with the former usually footing the bill for computing and data (usually bought from the Linguistic Data Consortium (LDC), itself essentially a collaboration between the military-industrial complex and the Ivy League). In speech recognition, there are hundreds of hours of transcribed speech in the public domain, and hundreds more can be obtained with an LDC contract paid for by your funders. In natural language processing, it is by now almost gauche for published research to make use of proprietary data, possibly excepting the venerable Penn Treebank.

I feel the data-and-computing issue is largely a myth. I do not know where it got started, though maybe it’s this bizarre press-release-masquerading-as-an-article (and note that’s actually about leaving one megacorp for another).

Talent & culture

Movements between academy & industry have historically been cyclic. World War II and the military-industrial-consumer boom that followed siphoned off a lot of academic talent. In speech & language technologies, the Bell breakup and the resulting fragmentation of Bell Labs pushed talent back to the academy in the 1980s and 1990s; the balance began to shift back to Silicon Valley about a decade ago.

There’s something to be said for “game knows game”—i.e., the talented want to work with the talented. And there’s a more general factor—large industrial organizations engage in careful “cultural design” to keep talent happy in ways that go beyond compensation and fringe benefits. (For instance, see Fergus Henderson’s description of engineering practices at Google.) But I think it’s important to understand this as a symptom of the problem, a lagging indicator, and as part of an unpredictable cycle, not as something to optimize for.

Closing thoughts

I’m a firm believer in “you do you”. But I do have one bit of specific advice for scientists in academia: don’t pay so much damn attention to Silicon Valley. Now, if you’re training students—and you’re doing it with the full knowledge that few of them will ever be able to work in the academy, as you should—you should educate yourself and your students to prepare for this reality. Set up a little industrial advisory board, coordinate interview training, talk with hiring managers, adopt industrial engineering practices. But, do not let Silicon Valley dictate your research program. Do not let Silicon Valley tell you how many GPUs you need, or that you need GPUs at all. Do not believe the hype. Remember always that what works for a few-dozen crypto-feudo-fascisto-libertario-utopio-futurist billionaires from California may not work for you. Please, let the academy once again be a refuge from neoliberalism, capitalism, imperialism, and war. America has never needed you more than we do right now.

If you enjoyed this, you might enjoy my paper, with Richard Sproat, on an important NLP task that neural nets are really bad at.

Understanding text encoding in Python 2 and Python 3

Computers were rather late to the word processing game. The founding mothers and fathers of computing were primarily interested in numbers. This is fortunate: after all, computers only know about numbers. But as Brian Kunde explains in his brief history of word processing, word processing existed long before digital computing, and text processing has always been something of an afterthought.

Humans think of text as consisting of an ordered sequence of “characters” (an ill-defined Justice-Stewart-type concept which I won’t attempt to clarify here). To manipulate text in digital computers, we have to have a mapping between the character set (a finite list of the characters the system recognizes) and numbers. Encoding is the process of converting characters to numbers, and decoding is (naturally) the process of converting numbers to characters. Before we get to Python, a bit of history.

ASCII and Unicode

There are only a few character sets that have any relevance to life in 2014. The first is ASCII (American Standard Code for Information Interchange), which was first published in 1963. This character set consists of 128 characters intended for use by an English audience. Of these, 95 are printable, meaning that they correspond to lay-human notions about characters. On a US keyboard, these are (approximately) the alphanumeric and punctuation characters that can be typed with a single keystroke, or with a single keystroke while holding down the Shift key, plus space, tab, the two newline characters (which you get when you type return), and a few apocrypha. The remaining 33 are non-printable “control characters”. For instance, the first character in the ASCII table is the “null byte”. This is indicated by '\0' in C and other languages, but there’s no standard way to render it. Many control characters were designed for earlier, more innocent times; for instance, character #7, '\a', tells the receiving device to ring a cute little bell (which were apparently attached to teletype terminals); today your computer might make a beep, or the terminal window might flicker once, but either way, nothing is printed.
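You can poke at these two control characters from any Python prompt (what the bell actually does, if anything, depends on your terminal):

>>> ord("\0"), ord("\a")
(0, 7)
>>> print("\a")   # may beep or flash the window; nothing visible is printed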

Of course, this is completely inadequate for anything but English (not to mention those users of the superfluous diaeresis…e.g., the editors of the New Yorker, Motörhead). However, each ASCII character takes up only 7 bits, leaving room for another 128 characters (since a byte has an integer value between 0 and 255, inclusive), and so engineers exploited the remaining 128 code points to write the characters of different alphabets, alphasyllabaries, or syllabaries. Of these ASCII-based character sets, the best-known are ISO/IEC 8859-1, also known as Latin-1, and Windows-1252, also known as CP-1252. Unfortunately, this created more problems than it solved. That last bit just didn’t leave enough space for the many languages which need a larger character set (Japanese kanji being an obvious example). And even when there were technically enough code points left over, engineers working on different languages didn’t see eye-to-eye about what to do with them. As a result, it was impossible to, for example, write in French (ISO/IEC 8859-1) about Ukrainian (ISO/IEC 8859-5, at least before the 1990 orthography reform).
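To see the problem concretely, here is a small Python 3 illustration (the byte value 0xE9 is just an arbitrary example): the same byte decodes to different characters under different ASCII extensions, and many characters cannot be encoded at all.

>>> b = bytes([0xE9])
>>> b.decode("latin-1")
'é'
>>> b.decode("iso8859-5")
'щ'
>>> "ї".encode("latin-1")   # raises UnicodeEncodeError: Latin-1 has no Cyrillic letters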

Clearly, fighting over scraps isn’t going to cut it in the global village. Enter the Unicode standard and its Universal Character Set (UCS), first published in 1991. Unicode is the platonic ideal of a character encoding, abstracting away from the need to efficiently convert all characters to numbers. Each character is represented by a single code point with various metadata (e.g., A is an “Uppercase Letter” from the “Latin” script). ASCII and its extensions map onto a small subset of the UCS.
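In Python, the unicodedata module exposes some of this metadata:

>>> import unicodedata
>>> unicodedata.name("A"), unicodedata.category("A")
('LATIN CAPITAL LETTER A', 'Lu')
>>> "U+%04X" % ord("ñ")
'U+00F1'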

Fortunately, not all encodings are merely shadows on the walls of a cave. The One True Encoding is UTF-8, which implements the entire UCS using an 8-bit code. There are other encodings, of course, but this one is ours, and I am not alone in feeling strongly that UTF-8 is the chosen encoding. At the risk of getting too far afield, here are two arguments for why you and everyone you know should just use UTF-8. First off, it hardly matters much which UCS-compatible encoding we all use (the differences between them are largely arbitrary), but what does matter is that we all choose the same one. There is no general procedure for “sniffing” out the encoding of a file, and there’s nothing preventing you from coming up with a file that’s a French cookbook in one encoding, and a top-secret message in another. This is good for steganographers, but bad for the rest of us, since so many text files lack encoding metadata. When it comes to encodings, there’s no question that UTF-8 is the most popular Unicode encoding scheme worldwide, and is on its way to becoming the de facto standard. Secondly, ASCII is valid UTF-8, because UTF-8 and ASCII encode the ASCII characters in exactly the same way. What this means, practically speaking, is you can achieve nearly complete coverage of the world’s languages simply by assuming that all the inputs to your software are UTF-8. This is a big, big win for us all.
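Both points are easy to verify in Python 3:

>>> "abc".encode("ascii") == "abc".encode("utf-8")
True
>>> "año".encode("utf-8")   # the non-ASCII ñ takes two bytes
b'a\xc3\xb1o'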

Decode early, encode late

A general rule of thumb for developers is “decode early” (convert inputs to their Unicode representation), “encode late” (convert back to bytestrings). The reason for this is that in nearly any programming language, Unicode strings behave the way our monkey brains expect them to, but bytestrings do not. To see why, try iterating over a non-ASCII bytestring in Python (more on the syntax later).

>>> for byte in b"año":
...     print(byte)
...
a
?
?
o

There are two surprising things here: iterating over the bytestring returned more bytes than there are “characters” (goodbye, indexing), and furthermore the 2nd “character” failed to render properly. This is what happens when you let computers dictate the semantics to our monkey brains, rather than the other way around. Here’s what happens when we try the same with a Unicode string:

>>> for byte in u"año":
...     print(byte)
...
a
ñ
o

The Python 2 & 3 string models

Before you put this all into practice, it is important to note that Python 2 and Python 3 use very different string models. The familiar Python 2 str class is a bytestring. To convert it to a Unicode string, use the str.decode instance method, which returns a copy of the string as an instance of the unicode class. Similarly, you can make a str copy of a unicode instance with unicode.encode. Both of these functions take a single argument: a string (either kind!) representing the encoding.
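A minimal Python 2 round trip, assuming UTF-8 input and a UTF-8 terminal, might look like the following (the variable names are mine):

>>> s = "a\xc3\xb1o"          # a str (bytestring) holding the UTF-8 bytes for "año"
>>> u = s.decode("utf-8")     # a unicode instance
>>> print(u)
año
>>> u.encode("utf-8") == s
True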

Python 2 provides specific syntax for Unicode string literals (which you saw above): a lower-case u prefix before the initial quotation mark (as in u"año").

When it comes to Unicode-awareness, Python 3 has totally flipped the script; in my opinion, it’s for the best. Instances of str are now Unicode strings (the u"" syntax still works, but is vacuous). The (reduced) functionality of the old-style strings is now just available for instances of the class bytes. As you might expect, you can create a bytes instance by using the encode method of a new-style str. Python 3 decodes bytestrings as soon as they are created, and (re)encodes Unicode strings only at the interfaces; in other words, it gets the “early/late” stuff right by default. Your APIs probably won’t need to change much, because Python 3 treats UTF-8 (and thus ASCII) as the default encoding, and this assumption is valid more often than not.
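The corresponding round trip in Python 3, where str is already Unicode and bytes is the bytestring type:

>>> u = "año"
>>> b = u.encode("utf-8")
>>> b
b'a\xc3\xb1o'
>>> b.decode("utf-8") == u
True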

If, for some reason, you want a bytestring literal, Python has syntax for that, too: prefix the quotation marks delimiting the string with a lower-case b (as in b"año"; see above also).

tl;dr

Strings are ordered sequences of characters. But computers only know about numbers, so they are encoded as byte arrays; there are many ways to do this, but UTF-8 is the One True Encoding. To get the strings to have the semantics you expect as a human, decode a string to Unicode as early as possible, and encode it as bytes as late as possible. You have to do this explicitly in Python 2; it happens automatically in Python 3.

Further reading

For more of the historical angle, see Joel Spolsky’s The absolute minimum every software developer absolutely, positively must know About Unicode and character sets (no excuses!).

Simpler sentence boundary detection

Consider the following sentence, from the Wall St. Journal portion of the Penn Treebank:

Rolls-Royce Motor Cars Inc. said it expects its U.S. sales to remain steady at about 1,200 cars in 1990.

This sentence contains 4 periods, but only the last denotes a sentence boundary. It’s obvious that the first period in U.S. is unambiguously part of an acronym, not a sentence boundary, and the same is true of expressions like $12.53. But the periods at the end of Inc. and U.S. could easily have been on the left edge of a sentence boundary; it just turns out they’re not. Humans can use local context to determine that neither of these is likely to be a sentence boundary; for example, the verb expect selects two arguments (an object, its U.S. sales, and an infinitival clause, to remain steady…), neither of which would be satisfied if U.S. were sentence-final. Similarly, not all question marks or exclamation points are sentence-final (sensu stricto):

He says the big questions–“Do you really need this much money to put up these investments? Have you told investors what is happening in your sector? What about your track record?”–aren’t asked of companies coming to market.

Much of the available data for natural language processing experiments—including the enormous Gigaword corpus—does not provide annotations for sentence boundaries. In Gigaword, for example, paragraphs and articles are annotated, but paragraphs may contain internal sentence boundaries, which are not indicated in any way. In natural language processing (NLP), the task of finding these boundaries is known as sentence boundary detection (SBD).[1] SBD is one of the earliest steps in many NLP pipelines, and since errors at this step are very likely to propagate, it is particularly important to just Get It Right.

An important component of this problem is the detection of abbreviations and acronyms, since a period ending an abbreviation is generally not a sentence boundary. But some abbreviations and acronyms do sometimes occur in sentence-final position (for instance, in the Wall St. Journal portion of the Penn Treebank, there are 99 sentence-final instances of U.S.). In this context, English writers generally omit one period, a sort of orthographic haplology.

NLTK provides an implementation of Punkt (Kiss & Strunk 2006), an unsupervised sentence boundary detection system; perhaps because it is easily available, it has been widely used. Unfortunately, Punkt is simply not very accurate compared to other systems currently available. Promising early work by Riley (1989) suggested a different way: a supervised classifier (in Riley’s case, a decision tree). Gillick (2009) achieved the best published numbers on the “standard split” for this task using another classifier, namely a support vector machine (SVM) with a linear kernel; Gillick’s features are derived from the words to the left and right of a period. Gillick’s code has been made available under the name Splitta.

I recently attempted to construct my own SBD system, loosely inspired by Splitta, but expanding the system to handle ellipses (…), question marks, exclamation points, and other sentence-final punctuation marks. Since Gillick found no benefits from tweaking the hyperparameters of the SVM, I used a hyperparameter-free classifier, the averaged perceptron (Freund & Schapire 1999). After performing a stepwise feature ablation, I settled on a relatively small set of features, extracted as follows. Candidate boundaries are identified using a nasty regular expression. If the left or right contextual tokens match a regular expression for American English numbers (including prices, decimals, negatives, etc.), they are merged into a special token *NUMBER* (per Kiss & Strunk 2006); a similar approach is used to convert various types of quotation marks into *QUOTE*. The following features were then extracted (a schematic version of this extraction is sketched after the list below):

  • the identity of the punctuation mark
  • identity of L and R (Reynar & Ratnaparkhi 1997, etc.)
  • the joint identity of both L and R (Gillick 2009)
  • does L contain a vowel? (Mikheev 2002)
  • does L contain a period? (Grefenstette 1999)
  • length of L (Riley 1989)
  • case of L and R (Riley 1989)
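Here is a rough sketch of what that extraction might look like; the regular expressions and names below are simplified stand-ins of my own devising, not DetectorMorse’s actual code.

import re

NUMBER = re.compile(r"^[-$(]?\d[\d,.]*%?\)?$")    # crude American-English number/price pattern
QUOTE = re.compile(r"""^["'`“”‘’]+$""")
VOWEL = re.compile(r"[aeiouy]", re.IGNORECASE)

def normalize(token):
    if NUMBER.match(token):
        return "*NUMBER*"
    if QUOTE.match(token):
        return "*QUOTE*"
    return token

def casing(token):
    if token.isupper():
        return "upper"
    if token[:1].isupper():
        return "title"
    return "lower"

def extract(left, mark, right):
    # Features for one candidate boundary: the punctuation mark itself plus the
    # normalized left (L) and right (R) context tokens.
    L, R = normalize(left), normalize(right)
    return {"mark": mark,
            "L": L,
            "R": R,
            "L+R": L + "^" + R,
            "L_has_vowel": bool(VOWEL.search(L)),
            "L_has_period": "." in L,
            "L_length": len(L),
            "L_case": casing(L),
            "R_case": casing(R)}

print(extract("Inc", ".", "said"))

Each candidate boundary thus yields a small bundle of named features, which can then be fed to the classifier.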

This 8-feature system performed exceptionally well on the “standard split”, with an accuracy of .9955, an F-score of .9971, and just 46 errors in all. This is very comparable with the results I obtained with a fork of Splitta extended to handle ellipses, question marks, etc.; this forked system produced 55 errors.

I have made my system freely available as a Python 3 module (and command-line tool) under the name DetectorMorse. Both code and dependencies are pure Python, so it can be run using pypy3, if you’re in a hurry.

Endnotes

[1] Or, sometimes, sentence boundary disambiguation, sentence segmentation, sentence splitting, sentence tokenization, etc.

References

Y. Freund & R.E. Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning 37(3): 277-296.
D. Gillick. 2009. Sentence boundary detection and the problem with the U.S. In Proc. NAACL-HLT, pages 241-244.
G. Grefenstette. 1999. Tokenization. In H. van Halteren (ed.), Syntactic wordclass tagging, pages 117-133. Dordrecht: Kluwer.
T. Kiss & J. Strunk. 2006. Unsupervised multilingual sentence boundary detection. Computational Linguistics 32(4): 485-525.
A. Mikheev. 2002. Periods, capitalized words, etc. Computational Linguistics 28(3): 289-318.
J.C. Reynar & A. Ratnaparkhi. 1997. A maximum entropy approach to identifying sentence boundaries. In Proc. 5th Conference on Applied Natural Language Processing, pages 16-19.
M.D. Riley. 1989. Some applications of tree-based modelling to speech and language indexing. In Proc. DARPA Speech and Natural Language Workshop, pages 339-352.