Search results

1 – 10 of over 91,000
Article
Publication date: 3 April 2018

Kentaro Yamamoto and Mary Louise Lennon

Abstract

Purpose

Fabricated data jeopardize the reliability of large-scale population surveys and reduce the comparability of such efforts by destroying the linkage between data and measurement constructs. Such data result in the loss of comparability across participating countries and, in the case of cyclical surveys, between past and present surveys. This paper aims to describe how data fabrication can be understood in the context of the complex processes involved in the collection, handling, submission and analysis of large-scale assessment data. The actors involved in those processes, and their possible motivations for data fabrication, are also elaborated.

Design/methodology/approach

Computer-based assessments produce new types of information that enable us to detect the possibility of data fabrication, and therefore the need for further investigation and analysis. The paper presents three examples that illustrate how data fabrication was identified and documented in the Programme for the International Assessment of Adult Competencies (PIAAC) and the Programme for International Student Assessment (PISA) and discusses the resulting remediation efforts.

Findings

For two countries that participated in the first round of PIAAC, the data showed a subset of interviewers who handled many more cases than others. In Case 1, the average proficiency for respondents in those interviewers’ caseloads was much higher than expected and included many duplicate response patterns. In Case 2, anomalous response patterns were identified. Case 3 presents findings based on data analyses for one PISA country, where results for human-coded responses were shown to be highly inflated compared to past results.
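The screening logic described in Cases 1 and 2 can be sketched as a simple tabulation: count cases per interviewer, then flag interviewers whose caseloads are unusually large and contain duplicated response patterns. A minimal illustration follows, with entirely hypothetical data, identifiers and thresholds (the paper does not publish its detection code):

```python
from collections import Counter

# Hypothetical records: (interviewer_id, response_pattern) pairs.
# A response pattern is a respondent's item-by-item answer string.
records = [
    ("A01", "1101"), ("A01", "1101"), ("A01", "1101"), ("A01", "1011"),
    ("B02", "1001"), ("B02", "0110"),
    ("C03", "1110"),
]

caseloads = Counter(iid for iid, _ in records)
mean_load = sum(caseloads.values()) / len(caseloads)

# Flag interviewers whose caseload is well above the mean AND whose
# caseload contains duplicate response patterns -- the two signals
# that appeared together in the PIAAC cases described above.
flagged = []
for iid, n in caseloads.items():
    patterns = [p for i, p in records if i == iid]
    has_duplicates = len(set(patterns)) < len(patterns)
    if n > 1.5 * mean_load and has_duplicates:
        flagged.append(iid)

print(flagged)  # interviewer A01 is flagged in this toy data
```

In practice the caseload threshold and the definition of a "duplicate" would be calibrated to the survey design; this sketch only shows the shape of the screen.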

Originality/value

This paper shows how new sources of data, such as timing information collected in computer-based assessments, can be combined with other traditional sources to detect fabrication.

Details

Quality Assurance in Education, vol. 26 no. 2
Type: Research Article
ISSN: 0968-4883

Book part
Publication date: 15 April 2014

Alexander W. Wiseman

Abstract

The development of a knowledge society in the Arabian Gulf is a nested and contextualized process that relies upon the development of nation-specific knowledge economies and region-wide knowledge cultures. The roles of internationally comparative education data and mass education systems in the Gulf as mechanisms for the development of knowledge economies, societies, and cultures are discussed and debated in relation to the unique contextual conditions within which countries operate. The role of “big” data and mass education in creating expectations for achievement, accountability, and access is shown to significantly contribute to the development of knowledge societies by providing the infrastructure and capacity for sustainable change, which potentially leads to the institutionalization of knowledge acquisition, exchange, and creation in the Gulf and beyond.

Details

Education for a Knowledge Society in Arabian Gulf Countries
Type: Book
ISBN: 978-1-78350-834-1

Article
Publication date: 26 August 2014

Oren Pizmony-Levy, James Harvey, William H. Schmidt, Richard Noonan, Laura Engel, Michael J. Feuer, Henry Braun, Carla Santorno, Iris C. Rotberg, Paul Ash, Madhabi Chatterji and Judith Torney-Purta

Abstract

Purpose

This paper presents a moderated discussion on popular misconceptions, benefits and limitations of International Large-Scale Assessment (ILSA) programs, clarifying how ILSA results could be more appropriately interpreted and used in public policy contexts in the USA and elsewhere in the world.

Design/methodology/approach

To bring key issues, points-of-view and recommendations on the theme to light, the method used is a “moderated policy discussion”. Nine commentaries were invited to represent voices of leading ILSA scholars/researchers and measurement experts, juxtaposed against views of prominent leaders of education systems in the USA that participate in ILSA programs. The discussion is excerpted from a recent blog published by Education Week. It is moderated with introductory remarks from the guest editor and concluding recommendations from an ILSA researcher who did not participate in the original blog. References and author biographies are presented at the end of the article.

Findings

Together, the commentaries address historical, methodological, socio-political and policy issues surrounding ILSA programs vis-à-vis the major goals of education and larger societal concerns. Authors offer recommendations for improving the international studies themselves and for making reports more transparent for educators and the public to facilitate greater understanding of their purposes, meanings and policy implications.

Originality/value

When assessment policies are implemented from the top down, as is often the case with ILSA program participation, educators and leaders in school systems tend to be left out of the conversation. This article is intended to foster a productive two-way dialogue among key ILSA actors that can serve as a stepping-stone to more concerted policy actions within and across national education systems.

Details

Quality Assurance in Education, vol. 22 no. 4
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 28 January 2014

Meiko Lin, Erin Bumgarner and Madhabi Chatterji

Abstract

Purpose

This policy brief, the third in the AERI-NEPC eBrief series “Understanding validity issues around the world”, discusses validity issues surrounding International Large Scale Assessment (ILSA) programs. ILSA programs, such as the well-known Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS), are rapidly expanding around the world today. In this eBrief, the authors examine what “validity” means when applied to published results and reports of programs like the PISA.

Design/methodology/approach

This policy brief is based on a synthesis of conference proceedings and review of selected pieces of extant literature. It begins by summarizing perspectives of an invited expert panel on the topic. To that synthesis, the authors add their own analysis of key issues. They conclude by offering recommendations for test developers and test users.

Findings

ILSA programs and tests, while offering valuable information, should be read and used cautiously and in context. All parties need to be on the same page to maximize valid use of ILSA results, to obtain the greatest educational and social benefits, and to minimize negative consequences. The authors propose several recommendations for test makers, ILSA program leaders and ILSA users.

To ILSA leaders and researchers: provide more cautionary information about how to correctly interpret ILSA results, particularly country rankings, given contextual differences among nations; provide continuing psychometric and research resources to address or reduce various sources of error in reports; encourage policy makers in different nations to share responsibility for ensuring more contextualized (and valid) interpretations of ILSA reports and subsequent policy development; and raise awareness among policy makers to look beyond simple rankings and pay more attention to inter-country differences.

For consumers of ILSA results and reports: read the fine print, not just the country rankings, to interpret ILSA results correctly in particular regions/nations; when looking to high-ranking countries as role models, be sure to consider the “whole picture”; and use ILSA data as a complement to other national- and state-level educational assessments to better gauge the status of the country's education system and subsequent policy directions.

Originality/value

By translating complex information on validity issues with all concerned ILSA stakeholders in mind, this policy brief will improve uses and applications of ILSA information in national and regional policy contexts.

Details

Quality Assurance in Education, vol. 22 no. 1
Type: Research Article
ISSN: 0968-4883

Book part
Publication date: 3 July 2018

Alexander W. Wiseman and Petrina M. Davidson

Abstract

The shift from data-informed to data-driven educational policymaking is conceptually framed by institutional and transhumanist perspectives. Examples of the shift to large-scale quantitative data driving educational decision-making suggest that data-driven educational policy will not adjust for context to the same degree as data-informed or data-based policymaking. Instead, the algorithmization of educational decision-making is both increasingly realizable and necessary in light of the overwhelmingly big data on education produced annually around the world. Evidence suggests that the isomorphic shift from localized data and individual decision-making about education to large-scale assessment data has changed the nature of educational decision-making and national educational policy. Big data are increasingly legitimized in educational policy communities at national and international levels, which means that algorithms are assumed to be the best way to analyze and make decisions about large volumes of complex data. There is a conceptual concern, however, that decontextualized or de-humanized educational policies may have the effect of increasing student achievement, but not necessarily of translating knowledge into economically, socially, or politically productive behavior.

Details

Cross-nationally Comparative, Evidence-based Educational Policymaking and Reform
Type: Book
ISBN: 978-1-78743-767-8

Article
Publication date: 3 April 2018

Irwin S. Kirsch, William Thorn and Matthias von Davier


Details

Quality Assurance in Education, vol. 26 no. 2
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 11 October 2020

Tessa Withorn, Joanna Messer Kimmitt, Carolyn Caffrey, Anthony Andora, Cristina Springfield, Dana Ospina, Maggie Clarke, George Martinez, Amalia Castañeda, Aric Haas and Wendolyn Vermeer

Abstract

Purpose

This paper aims to present recently published resources on library instruction and information literacy, providing an introductory overview and a selected annotated bibliography of publications covering various library types, study populations and research contexts.

Design/methodology/approach

This paper introduces and annotates English-language periodical articles, monographs, dissertations, reports and other materials on library instruction and information literacy published in 2019.

Findings

The paper provides a brief description of all 370 sources and highlights sources that contain unique or significant scholarly contributions.

Originality/value

The information may be used by librarians, researchers and anyone interested as a quick and comprehensive reference to literature on library instruction and information literacy.

Details

Reference Services Review, vol. 48 no. 4
Type: Research Article
ISSN: 0090-7324

Book part
Publication date: 17 June 2020

Florin D. Salajan and Tavis D. Jules

Abstract

Over the past few years, assemblage theory or assemblage thinking has garnered increasing attention in educational research, but has been used only tangentially in explications of the nature of comparative and international education (CIE) as a field. This conceptual examination applies an assemblage theory lens to explore the contours of CIE as a scholarly field marked by its rich and interwoven architecture. It does so by first reviewing Deleuze and Guattari’s (1987) principles of rhizomatic structures to define the emergence of assemblages. Secondly, it transposes these principles in conceiving the field of CIE as a meta-assemblage of associated and subordinated sub-assemblages of actors driven by varied disciplinary, interdisciplinary or multidisciplinary interests. Finally, it interrogates the role of Big Data technologies in exerting (re)territorializing and deterritorializing tendencies on the (re)configuration of CIE. The chapter concludes by reiterating the variable character of CIE as a meta-assemblage and proposes ways to move this conversation forward.

Details

Annual Review of Comparative and International Education 2019
Type: Book
ISBN: 978-1-83867-724-4

Book part
Publication date: 22 August 2014

Deepa Srikantaiah and Wendi Ralaingita

Abstract

The Global Mathematics Education Special Interest Group (SIG) of the Comparative and International Education Society (CIES) provides a forum for researchers and practitioners from around the world to discuss theory, practices, and techniques for mathematics learning from early childhood to tertiary education. Teacher education and professional development is a significant focus of the SIG’s conversations. This chapter discusses the current and future impact of CIE research on teacher education and professional development in global mathematics. A range of factors can undermine students’ performance in mathematics. In many contexts, teacher shortages result in underqualified teachers; teachers trained in other subjects are assigned to teach mathematics; or teacher training lacks adequate focus on teaching mathematics for understanding. While these factors exist in many contexts, they are most acute in low-income countries and communities. Mathematics is widely recognized as a mechanism for economic growth, at individual and system levels. However, low-income countries and marginalized populations perform poorly in cross-national assessments. As a result, lower-performing countries may emulate the policies and practices of higher-performing countries. In such cases there is a risk of superficial “fixes” that ignore contextual factors. There are ways to reduce such risks by combining such assessments with more contextual studies, or by using cross-national assessments as catalysts for examining what is happening locally. Looking forward, there is reason for optimism about the recognition of the importance of early-grade numeracy; recognition of the intersections of mathematics, culture, and language; and the potential for reaching across CIE areas and methodologies to develop a more measured and nuanced view of assessment results.

Details

Annual Review of Comparative and International Education 2014
Type: Book
ISBN: 978-1-78350-453-4

Article
Publication date: 3 April 2018

Matthias von Davier

Abstract

Purpose

Surveys that include skill measures may suffer from additional sources of error compared to those containing questionnaires alone. Examples are distractions such as noise or interruptions of testing sessions, as well as fatigue or lack of motivation to succeed. This paper aims to provide a review of statistical tools based on latent variable modeling approaches extended by explanatory variables that allow detection of survey errors in skill surveys.

Design/methodology/approach

This paper reviews psychometric methods for detecting sources of error in cognitive assessments and questionnaires. Aside from traditional item responses, new sources of data in computer-based assessment are available – timing data from the Programme for the International Assessment of Adult Competencies (PIAAC) and data from questionnaires – to help detect survey errors.

Findings

Some unexpected results are reported. Respondents who tend to use response sets have lower expected values on PIAAC literacy scales, even after controlling for scores on the skill-use scale that was used to derive the response tendency.

Originality/value

The use of new sources of data, such as timing and log-file or process data information, provides new avenues to detect response errors. It demonstrates that large data collections need to better utilize available information and that integration of assessment, modeling and substantive theory needs to be taken more seriously.
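As one illustration of how timing data can surface suspect responses, implausibly fast answers can be flagged relative to an item's typical response time. The following sketch uses invented data and an invented threshold purely for illustration; it is not the paper's actual method:

```python
from statistics import median

# Hypothetical log-file data: response times in seconds, keyed by item.
response_times = {
    "item1": [45.0, 52.0, 48.0, 1.2, 50.0],
    "item2": [30.0, 28.0, 33.0, 31.0, 0.8],
}

# Flag any response faster than 10% of the item's median time --
# a rough screen for rapid guessing or fabricated entry.
suspect = {
    item: [t for t in times if t < 0.1 * median(times)]
    for item, times in response_times.items()
}
print(suspect)  # the sub-second responses stand out for both items
```

A real screen would model response times jointly with responses (as the latent variable approaches reviewed in the paper do) rather than applying a fixed cutoff.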

Details

Quality Assurance in Education, vol. 26 no. 2
Type: Research Article
ISSN: 0968-4883
