A recent piece in the Boston Globe quoted my take on a grant to Providence, RI for a “word-gap” intervention. In this quote, I expressed some skepticism about the grant’s goals, but omitted the part of the email where I explained why I felt that way. Readers of the piece might have gotten the impression that I had a less, uhm, nuanced take on the Providence grant than I do. So, here is a summary of my full email to Ben from which the quote was taken.
An ambitious proposal
First off, the Providence/LENA team should be congratulated on this successful grant application: I’m glad they got it and not something more “Bloombergian” (like, say, an experimental proposal to ban free-pizzas-with-beer deals in the interest of bulging hipster waistlines). And they deserve respect for getting approved for such an ambitious proposal: the cash involved is an order of magnitude larger than the average applied linguistics grant. And, perhaps most of all, I have a great deal of respect for any linguist who can convince a group of non-experts not only that their work is important, but that it is worth the opportunity cost. I also note that if materials from the Providence study are made publicly available (and they should be, in a suitably de-identified format, for the sake of the progress of the human race), my own research stands to benefit from this grant.
There is another sense in which the proposal is ambitious, however: the success of this intervention depends on a long chain of inferences. If any one of these is wrong, the intervention is unlikely to succeed. Here are what I see as the major assumptions under which the intervention is being funded.
Assumption I: There exists a “word gap” in lower-income children
I was initially skeptical of this claim because it is so similar to a discredited assumption of 20th-century educational theorists: that differences in school and standardized test performance were the result of the “linguistically impoverished” environment in which lower-class (and especially minority) speakers grew up.
This strikes me as quite silly: no one who has even a tenuous acquaintance with African-American communities could fail to note the importance of verbal skills in those communities. Every African-American stereotype I can think of has one thing in common: an emphasis on verbal abilities. Here’s what Bill Labov, founder of sociolinguistics, had to say in his 1972 book, Language in the Inner City:
Black children from the ghetto area are said to receive little verbal stimulation, to hear very little well-formed language, and as a result are impoverished in their means of verbal expression…Unfortunately, these notions are based upon the work of educational psychologists who know very little about language and even less about black children. The concept of verbal deprivation has no basis in social reality. In fact, black children in the urban ghettos receive a great deal of verbal stimulation…and participate fully in a highly verbal culture. (p. 201)
I suspect that Labov may have dismissed the possibility of input deficits prematurely, just as I did. After all, it is an empirical hypothesis, and while Betty Hart and Todd Risley’s original study of differences in lexical input involved a small and perhaps-atypical sample, the correlation between socioeconomic status and lexical input has since been replicated many times. So, there may be something to the “impoverishment theory” after all.
Assumption II: LENA can really estimate input frequency
Can we really count words using current speech technology? In a recent Language Log post, Mark Liberman speculated that counting words might be beyond the state of the art. While I have been unable to find much information on the researchers behind the grant or behind LENA, I don’t see any reason to doubt that the LENA Foundation has in fact built a useful state-of-the-art speech system that allows them to estimate input frequencies with great precision. One thing that gives me hope is that a technical report by LENA researchers provides estimates of average input frequency in English which are quite close to an estimate computed by developmentalist Dan Swingley (in a peer-reviewed journal) using entirely different methods.
Assumption III: The “word gap” can be solved by intervention
For children who are identified as “at risk”, the Providence intervention offers the following:
Families participating in Providence Talks would receive these data during a monthly coaching visit along with targeted coaching and information on existing community resources like read-aloud programs at neighborhood libraries or special events at local children’s museums.
Will this have a long-term effect? I simply don’t know of any work looking into this (though please comment if you’re aware of something relevant), so this too is a strong assumption.
Given that there is now money in the budget for coaching, why are LENA devices necessary? Would it be better if any concerned parent could get coaching?
And, finally, do the caretakers of the most at-risk children really have time to give to this intervention? I believe the most obvious explanation for the correlation between verbal input and socioeconomic status is that caretakers on the lower end of the socioeconomic scale have less time to give to their children’s education: this is consistent with the observation that child care quality is a strong predictor of cognitive abilities. If this is the case, then simply offering coaching will do little to eliminate the word gap, since the families most at risk are the least able to take advantage of the intervention.
Assumption IV: The “word gap” has serious life consequences
Lexical input is clearly important for language development: it is, in some sense, the sole factor determining whether a typically developing child acquires English or Yawelmani. And, we know the devastating consequences of impoverished lexical input.
But here we are at risk of falling for an all-too-common fallacy: assuming that what predicts variance within clinical populations also predicts variance within subclinical ones. While massively impoverished language input gives rise to clinical language deficits, it does not follow that differences in language skills among typically developing children can be eliminated by leveling the language-input playing field.
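To make the fallacy concrete, here is a minimal sketch (in Python, with made-up numbers; nothing below comes from Hart and Risley’s data or from LENA) of a toy world in which lexical input matters only below a deprivation threshold. In that world, input correlates with outcomes across the full sample, yet explains essentially none of the variance among typically developing children, so leveling input above the threshold would level nothing.

```python
# Toy simulation of the clinical/subclinical fallacy. All numbers are
# invented for illustration; they are not estimates of real input or outcomes.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical daily word input (in thousands of words), with an exaggerated
# 5% of the sample severely deprived, just to make the effect visible.
input_kwords = rng.normal(loc=13.0, scale=4.0, size=n).clip(min=0.0)
input_kwords[:500] = rng.uniform(0.0, 1.0, size=500)

# In this toy world, outcomes suffer only below a deprivation threshold;
# above it, additional input buys nothing.
THRESHOLD = 2.0
noise = rng.normal(0.0, 10.0, size=n)
outcome = np.where(input_kwords < THRESHOLD,
                   60.0 + 10.0 * input_kwords,  # deprivation causes real deficits
                   100.0) + noise               # typical range: input is irrelevant

typical = input_kwords >= THRESHOLD
print("r, full sample:   %.2f" % np.corrcoef(input_kwords, outcome)[0, 1])
print("r, typical range: %.2f" % np.corrcoef(input_kwords[typical],
                                             outcome[typical])[0, 1])
# The first correlation is sizable, driven entirely by the deprived cases;
# the second is near zero.
```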
Word knowledge (as measured by verbal IQ, for instance) is correlated with many other measures of language attainment, but are increases in language skills enough to help an at-risk child escape the ghetto (so to speak)?
This is the most ambitious assumption of the Providence intervention. Because there is such a strong correlation between lexical input and social class, it is very difficult to control for the latter while manipulating the former (and doing so experimentally would presumably be wildly unethical), so we know very little about this subject. I hope that the Providence study will shed some light on this question.
So what’s wrong with more words?
This is exactly what my mom wanted to know when I sent her a link to the Globe piece. She wanted to emphasize that I got only the highest-quality word-frequency distributions throughout my critical period! I support, tentatively, the Providence initiative and wish them the best of luck; if these assumptions all turn out to be true, the organizers and scientists behind the grant will be real heroes to me.
But that leads me to the only negative effect this intervention could have: if closing the word gap does little to influence long-term educational outcomes, it will have made concerned parents unduly anxious about the environment they provide for their children. And that just ain’t right.
(Disclaimer: I work for OHSU, where I’m supported by grants, but these are my professional and personal opinions, not those of my employer or funding agencies. That should be obvious, but you never know.)
I recently spoke to NPR’s All Things Considered about this: http://www.npr.org/2014/03/17/289799002/efforts-to-close-the-achievement-gap-in-kids-start-at-home