
How do the kids speak? Improving educational use of text mining with child-directed language models

Peter Organisciak (Department of Research Methods and Information Science, University of Denver, Denver, Colorado, USA)
Michele Newman (Information School, University of Washington, Seattle, Washington, USA)
David Eby (School of Information Sciences, University of Illinois at Urbana-Champaign, Champaign, Illinois, USA)
Selcuk Acar (Department of Educational Psychology, University of North Texas, Denton, Texas, USA)
Denis Dumas (Department of Educational Psychology, University of Georgia, Athens, Georgia, USA)

Information and Learning Sciences

ISSN: 2398-5348

Article publication date: 19 January 2023

Issue publication date: 28 February 2023




Purpose

Most educational assessments tend to be constructed in a closed-ended format, which is easier to score consistently and more affordable. However, recent work has leveraged computational text methods from the information sciences to make open-ended measurement more effective and reliable for older students. The purpose of this study is to determine whether the models used by computational text mining applications need to be adapted when used with samples of elementary-aged children.


Design/methodology/approach

This study introduces domain-adapted semantic models for child-specific text analysis, to enable better educational assessment of elementary-aged children. A corpus compiled from a multimodal mix of spoken and written child-directed sources is presented; it is used to train a children’s language model, which is evaluated against standard, non-age-specific semantic models.


Findings

Child-oriented language is found to differ from general English in vocabulary and word-sense use, while exhibiting lower gender and race biases. The model is evaluated in an educational application of divergent thinking measurement and shown to improve on generalized English models.

Research limitations/implications

The findings demonstrate the need for age-specific language models in the growing domain of automated divergent thinking measurement and, by showing a measurable difference in children’s language, strongly encourage the same for other educational uses of computational text analysis.

Social implications

Representing children’s language more accurately in automated educational assessment allows for fairer and more equitable testing. Furthermore, child-specific language models have fewer gender and race biases.


Originality/value

Research in computational measurement of open-ended responses has thus far used models of language trained on general English sources or domain-specific sources such as textbooks. To the best of the authors’ knowledge, this paper is the first to study age-specific language models for educational assessment. In addition, while there have been several targeted, high-quality corpora of child-created or child-directed speech, the corpus presented here is the first developed with the breadth and scale required for large-scale text modeling.



Acknowledgements

The authors thank Kelly Berthiaume, Maggie Ryan and the full MOTES team for additional contributions and advice.

Funding: This study was funded by the Institute of Education Sciences (IES) (Grant No. R305A200519).

Research data: The MOTES Corpus model, as well as code for reproducing data collection and modeling, is available at


Organisciak, P., Newman, M., Eby, D., Acar, S. and Dumas, D. (2023), "How do the kids speak? Improving educational use of text mining with child-directed language models", Information and Learning Sciences, Vol. 124 No. 1/2, pp. 25-47.



Emerald Publishing Limited

Copyright © 2022, Emerald Publishing Limited
