Indic is an adjective referring to the Indo-Aryan languages, such as Hindi-Urdu or Bengali. These languages are spoken mostly in the northern parts of India, as well as in Bangladesh, Pakistan, Sri Lanka, Nepal, and the Maldives. The term can be confusing, because hundreds of millions of people in the Indian subcontinent (and nearby island nations) speak non-Indic first languages: over 250 million people, particularly in the south of India and the north of Sri Lanka, speak Dravidian languages, which include Malayalam, Tamil, and Telugu. Austroasiatic, Tibeto-Burman, and Tai-Kadai languages, as well as several language isolates, are also spoken in India and the other nations of the subcontinent, as is English (and French, and Portuguese). Unfortunately, there is now a trend to use Indic to mean ‘languages of the subcontinent’. See here for a prominent example. This is a new sense for Indic, and while there is probably a need for a lexeme expressing this notion (language of India or subcontinental language would work), reusing Indic, which already has a distinct and well-established sense, just adds unnecessary confusion.
A minor syntactic innovation in English: “BE crazy”
I recently became aware of an English syntactic construction I hadn’t noticed before. It involves the predicate BE crazy, which itself is nothing new, but here the subject of that predicate is, essentially, quoted speech from a second party. I myself am apparently a user of this variant. For example, a friend told me of someone who describes themselves (on an online dating platform) as someone who …likes travel and darts, and I responded, simply, Likes darts is crazy. That is to say, I am making some kind of assertion that the description “likes darts”, or perhaps the speech act of describing oneself as such, is itself a bit odd. Now in this case, the subject is simply the quotation (with the travel and part elided), and while this forms a constituent, a tensed VP, we don’t normally accept tensed VPs as the subjects of predicates. And I suspect constituenthood is not even required. So this is distinct from the ordinary use of BE crazy with a nominal subject.
I suspect, though I do not have the means to prove it, that this is a relatively recent innovation; I hear it from my peers (i.e., those of similar age, not my colleagues at work, who may be older) and from students, but not often elsewhere. I also initially thought it might be associated with the Mid-Atlantic, but I am no longer so sure.
Your thoughts are welcome.
Optionality as acquirendum
A lot of work deals with the question of acquiring “optional” or “variable” grammatical rules, and my impression is that different communities are mostly talking at cross-purposes. I discern at least three ways linguists conceive of optionality as something which the child must acquire.
- Some linguists assume—I think without much evidence—that optionality is mere “free variation”, so that the learner simply needs to infer which rules bear a binary [optional] feature. This is an old idea, going back to at least Dell (1981); Rasin et al. (2021:35) explicitly state the problem in this form.
- Variationist sociolinguists focus on the differential rates at which grammatical rules apply. They generally recognize the acquirenda as, essentially, conditional probability distributions giving the probability of rule application in a given grammatical context (a toy sketch of this conception appears just after this list). Bill Labov is a clear avatar of this strain of thinking (e.g., Labov 1989). David Adger and colleagues have attempted to situate this within modern syntactic frameworks (e.g., Adger 2006).
- Some linguists believe that optionality is not statable within a single grammar and so must reflect competition between multiple grammars. The major proponent of this approach is Anthony Kroch (e.g., Kroch 1989). While this conception might license some degree of “nihilism” about optionality, it has also led to interesting work hypothesizing substantive grammar-internal constraints on variation, as in the work of Laurel MacKenzie and colleagues (e.g., MacKenzie 2019). This work is also very good at ridding (2) of some of its unfortunate “externalist” thinking.
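To make (2) concrete, here is a toy sketch, in Python, of what such an acquirendum might look like: a conditional probability of rule application estimated separately for each grammatical context. The variable, the contexts, and the counts below are all fabricated for illustration (loosely inspired by English (t, d)-deletion); this is not a reconstruction of any particular published model.

```python
# A toy illustration of the variationist conception of optionality: the
# acquirendum is a conditional distribution P(rule applies | grammatical context).
# The contexts and counts are fabricated; they are not anyone's actual data.

from collections import Counter

# Fabricated corpus counts, keyed by (grammatical context, rule applied?).
counts = Counter({
    ("monomorpheme, pre-consonantal", True): 80,
    ("monomorpheme, pre-consonantal", False): 20,
    ("monomorpheme, pre-vocalic", True): 35,
    ("monomorpheme, pre-vocalic", False): 65,
    ("past tense, pre-consonantal", True): 40,
    ("past tense, pre-consonantal", False): 60,
    ("past tense, pre-vocalic", True): 10,
    ("past tense, pre-vocalic", False): 90,
})


def application_rate(context: str) -> float:
    """Maximum-likelihood estimate of P(rule applies | context)."""
    applied = counts[(context, True)]
    total = applied + counts[(context, False)]
    return applied / total


for context in sorted({context for context, _ in counts}):
    print(f"P(deletion | {context}) = {application_rate(context):.2f}")
```

Variationist work of course goes further, modeling these rates with weighted factors (e.g., logistic regression), but the point here is only the shape of the object the child is supposed to acquire.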
I have to reject (1) as overly simplistic. I find (2) and (3) both compelling in some way, but a lot of work remains to synthesize them or to adjudicate between them.
References
Adger, D. 2006. Combinatorial variability. Journal of Linguistics 42(3): 503-530.
Dell, F. 1981. On the learnability of optional phonological rules. Linguistic Inquiry 12(1): 31-37.
Kroch, A. 1989. Reflexes of grammar in patterns of language change. Language Variation & Change 1(3): 199-244.
Labov, W. 1989. The child as linguistic historian. Language Variation & Change 1(1): 85-97.
MacKenzie, L. 2019. Perturbing the community grammar: Individual differences and community-level constraints on sociolinguistic variation. Glossa 4(1): 28.
Rasin, E., Berger, I., Lan, R., Shefi, I., and Katzir, R. 2021. Approaching explanatory adequacy in phonology using Minimum Description Length. Journal of Language Modelling 9(1): 17-66.
The different functions of probability in probabilistic grammar
I have long been critical of naïve interpretations of probabilistic grammar. To me, it seems that the major motivation for this approach derives from a naïve—I’d say overly naïve—linking hypothesis mapping acceptability judgments onto grammaticality, as seen in Likert scale-style acceptability tasks. (See chapter 2 of my dissertation for a concrete argument against this.) In this approach, then, the probabilities are measures of well-formedness.
It occurs to me that there are a number of ontologically distinct interpretations of grammatical probabilities of the sort produced by “maxent”, i.e., logistic regression models.
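For concreteness, the number under discussion has the familiar log-linear form: each candidate y for an input x receives a weighted sum of feature (or constraint-violation) values f_k(x, y), which is normalized over the candidate set, exactly as in multinomial logistic regression. The notation below is generic rather than drawn from any particular paper; sign conventions for the weights vary across the literature.

\[
P(y \mid x) \;=\; \frac{\exp\bigl(\sum_k w_k\, f_k(x, y)\bigr)}{\sum_{y'} \exp\bigl(\sum_k w_k\, f_k(x, y')\bigr)}
\]

The interpretations discussed here differ not in how this number is computed but in what it is taken to be a probability of.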
For instance, at M100 this weekend, I heard Bruce Hayes talk about another use of maximum entropy models: scansion. In poetic meters, there is variation in, say, whether the caesura is masculine (after a stressed syllable) or feminine (after an unstressed syllable), and the probabilities reflect that.1 However, I don’t think it makes sense to equate this with grammaticality, since we are talking about variation in highly self-conscious linguistic artifacts here, and there is no reason to think one style of caesura is more grammatical than the other.2
And of course there is a third interpretation, in which the probabilities are production probabilities, representing actual variation in production, within a speaker or across multiple speakers.
It is not obvious to me that these facts all ought to be modeled the same way, yet the maxent community seems comfortable assuming a single cognitive model to cover all three scenarios. To state the obvious, it makes no sense for a cognitive model to account for interspeaker variation, because there is no such thing as “interspeaker cognition”; there are just individual mental grammars.
Endnotes
1. This is a fabricated example, because Hayes and colleagues mostly study English meter—something I know nothing about—whereas I’m interested in Latin poetry. I imagine English poetry has caesurae too, but I’ve given it no thought yet.
2. I am not trying to say that we can’t study grammar with poetry. Separately, I note (as I believe Paul Kiparsky did at the talk) that this model also assumes that the input text the poet is trying to fit to the meter has no role to play in constraining what happens.
Myths about writing systems
In collaboration with Richard Sproat, I just published a short position paper on “myths about writing systems” in NLP, which will appear in the proceedings of CAWL, the ACL Workshop on Computation and Written Language. I think it will be most useful to reviewers and editors who need a resource to combat nonsense like Persian is a right-to-left language and want to suggest a correction. Take a look here.
Linguistics and prosociality
It is commonly said that linguistics as a discipline has enormous prosocial potential. I actually suspect that this potential is smaller than some linguists imagine. Linguistics is of course essential to the deep question of “what is human nature”, but we are up against our own epistemic bounds in answering it, and the social impact of an answer is not at all clear to me. Linguistics is also essential to the design of speech and language processing technologies (despite what you may have heard: don’t believe the hype), and while I find these technologies exciting, it remains to be seen whether they will be as societally transformative as investors think. And language documentation is transformative for some of society’s most marginalized. But I am generally skeptical of linguistics’ and linguists’ ability to combat societal biases more broadly. While I don’t think any member of society should be considered well-educated until they’ve thought about the logical problems of language acquisition, considered the idea of language as something that exists in the mind rather than just in the ether, or confronted standard language ideologies, I have to question whether the broader discipline has been very effective at getting these messages out.
Noam and Bill are friends
One of the more confusing slanders against generativism is the belief that it has all somehow been undone by William Labov and the tradition of variationist sociolinguistics. I have bad news: Noam and Bill are friends. I saw them chopping it up once, in Philadelphia, and I have to assume they were making fun of functionalists. Bill has nice things to say about the generativist program in his classic paper on negative concord; Noam has some interesting comments about how the acquirenda probably involve multiple competing grammars in that Piaget lecture book. They both think functionalism is wildly overrated. And of course, the i-language perspective Noam brings is absolutely essential to the dialogues about language ideologies, language change, and stigma and stratification that we associate with Bill.
Neurolinguistic deprogramming
I venture to say that most working linguists would reject—outright—strong versions of linguistic relativity and the Sapir-Whorf hypothesis, and would regard neuro-linguistic programming as pseudoscientific rubbish. This is of course in contrast to the general public: even the highly educated take linguistic relativity to be an obvious description of human life. Yet it is not uncommon for the same linguists to endorse beliefs in the power of renaming that are hard to reconcile with the general disrepute of the vulgar Whorfian view such power assumes.
For instance, George Lakoff’s work on “framing” in politics argued that renaming social programs was the one weird trick needed to get Howard Dean into the White House. While this seems quaint in retrospect, his proposal was widely debated at the time. Pinker’s (sigh) takedown is necessary reading. The problem, of course, is that Lakoff ought to have provided, and ought to have been expected to provide, at least some evidence for a view of language his colleagues widely regard as untutored.
The case of renaming languages is a grayer one. I believe that one ought to call people what they want to be called, and that if stakeholders would prefer their language to be referred to as Tohono Oʼodham rather than Pápago, I am and will remain happy to oblige.1 If African American Vernacular English is renamed to African American Language (as seems to be increasingly common in scholarship), I will gladly follow suit. But I can’t imagine how the renaming could represent either a reconceptualization of the language itself or a change in how we study it. Indeed, it would be strange for the name of any language to reflect any interesting property of said language. French by any other name would still have V-to-T movement and liaison.
It may be that these acts of renaming have power. Indeed, I hope they do. But I have to suspect the opposite: they’re the sort of fiddling one does when one is out of power, when one is struggling to believe that a better world is possible. And if I’m wrong, who is better suited to show that than the trained linguist?
Endnotes
1. Supposedly, the older name of the language comes from a pejorative used by a neighboring tribe, the Pima: Ba꞉bawĭkoʼa means, roughly, ‘tepary bean eater’. The Spanish colonizers adapted this as Pápago. I feel like the gloss sounds like a cutting insult in English too, so I get why this exonym has fallen into disrepute.
e- and i-France
It will probably not surprise the reader to see me claim that France and French are both sociopolitical abstractions. France is, like all states, an abstraction, and it is hard to point to physical manifestations of France the state. But we understand that states are bundles of related institutions with (mostly) shared goals. These institutions give rise to our impression of the Fifth Republic, though at other times in history conflict between such institutions gave rise to revolution. But at present the defining institutions are sufficiently aligned that we can usefully talk as if they were one. This is not so different from the i-language perspective on languages. Each individual “French” speaker has a grammar projected by their brain, and these grammars are (generally speaking) sufficiently similar that we can maintain the fiction that they are the same. The only difference I see is that linguists can give a rather explicit account of any given instance of i-French, whereas it is difficult to describe political institutions in similarly detailed terms (though this may just reflect my own ignorance about modern political science). In some sense, this explicitness at the i-language level makes e-French seem even more artificial than e-France.
Stop being weird about the Russian language
As you know, Russia is waging an unprovoked war on Ukraine. It should go without saying that my sympathies are with Ukraine, but of course both states are undemocratic, one-party kleptocracies, and I have little hope for anything good coming from the conflict.
That’s all beside the point. Since the start of the war, I have had several conversations with linguists who suggested that the study of the Russian language—one of the most important languages in linguistic theorizing over the years—is now “cringe”. This is nonsense. First, official statistics show that a substantial minority of Ukrainian citizens identify as ethnically Russian, and that an even larger share speak Russian as a first language (and even these figures are probably skewed by social-desirability bias). Second, it is wrong to identify a language with any one nation. (It is “cringe” to use flag emojis to label languages; just use the ISO codes.) Third, it is foolish to equate a state with the people who live under it, particularly after the end of the kind of mass political movements that in earlier times could stop this kind of state violence. It is a basic corollary of the i-language view that children learn whatever languages they’re sufficiently exposed to, regardless of their location or their caretakers’ politics. The iniquity of war does not travel from nation to language to its speakers. Stop being weird about it.