This study, by critically analyzing material from multiple sources, aims to provide an overview of the available evaluation tools for educational apps for children. To realize this objective, a systematic literature review was conducted, searching all English-language literature published after January 2010 in multiple electronic databases and internet sources. Various combinations of search strings were used to accommodate differences in database construction, and the results were cross-referenced to discard duplicate references, retaining only those that met the inclusion criteria.
The present study was conducted according to the methods described by Khan et al. (2003) and Thomé et al. (2016). The procedure comprised four stages: planning the review, identifying relevant studies in the literature, critically analyzing the literature, and summarizing and interpreting the findings (Figure 1). In addition, the well-known PRISMA checklist was followed as a guideline (Moher et al., 2015).
The review results reveal that, although several evaluation tools exist, most are not considered adequate to help teachers and parents evaluate the pedagogical affordances of educational apps correctly and easily. Indeed, most of these tools are considered outdated. With the emergence of new issues such as the General Data Protection Regulation (GDPR), the quality criteria and methods for assessing children's products need to be continuously updated and adapted (Stoyanov et al., 2015). Some of these tools might be considered good beginnings, but their “limited dimensions make generalizable considerations about the worth of apps” (Cherner, Dix and Lee, 2014, p. 179). Thus, there is a strong need for effective evaluation tools to help parents and teachers choose educational apps (Callaghan and Reich, 2018).
Although this work followed systematic mapping guidelines, threats to the validity of the results remain. Custom search strings covering a rich collection of terms were used to locate papers, yet potentially relevant publications may still have been missed by the advanced search. It is recommended that at least two reviewers independently screen titles, abstracts and, later, full papers for exclusion (Thomé et al., 2016). In this study, only one reviewer – the author – selected the papers and conducted the review. For the case of a single researcher, Kitchenham (2004) recommends that the reviewer discuss included and excluded papers with an expert panel. Following this recommendation, the researcher discussed the inclusion and exclusion procedure with an expert panel of two professionals with research experience from the Department of (removed for blind review). To address publication bias, the researcher, in conjunction with the expert panel, used the search strategies identified by Kitchenham (2004), including grey literature, conference proceedings, and communication with experts working in the field to locate any unpublished literature.
The purpose of this study was not to advocate any particular evaluation tool. Instead, it aims to make parents, educators and software developers aware of the various evaluation tools available and to highlight their strengths, weaknesses and credibility. The study also underscores the need for standardized app evaluation (Green et al., 2014) via reliable tools that allow anyone interested to evaluate apps with relative ease (Lubniewski et al., 2018). Parents and educators need a reliable, fast and easy-to-use tool for evaluating educational apps that is more than a general guideline (Lee and Kim, 2015). A new generation of evaluation tools could also serve as a reference for software developers and designers in creating educational apps with real educational value.
The results of this study point to the necessity of creating new, research-based evaluation tools, whether in the form of rubrics or checklists, to help educators and parents choose apps with real educational value.
However, to date, no systematic review has been published summarizing the available app evaluation tools.
The author would like to thank his colleagues for their help during the preparation of this paper.
Funding: The present study was funded by a grant from the Special Account for Research Funds of University of Crete (SARF UoC).
Disclosure statement: No competing financial interests exist.
Data availability: All data generated or analyzed during this review are included within the manuscript and its additional files.
Papadakis, S. (2021), "Tools for evaluating educational apps for young children: a systematic review of the literature", Interactive Technology and Smart Education, Vol. 18 No. 1, pp. 18-49. https://doi.org/10.1108/ITSE-08-2020-0127
Copyright © 2020, Emerald Publishing Limited