Search results
1 – 10 of 107
Thomas Salzberger, Hartmut H. Holzmüller and Anne Souchon
Abstract
Measures are comparable if and only if measurement equivalence has been demonstrated. Although comparability and equivalence of measures are sometimes used interchangeably, we advocate a subtle but important difference in meaning. Comparability implies that measures from one group can be compared with measures from another group. It is a property of the measures that either holds or does not. In particular, comparability presumes valid measures within each group compared. Measurement equivalence, by contrast, refers to the way measures are derived and estimated. It is intrinsically tied to the underlying theory of measurement. Thus, measurement equivalence cannot be dealt with in isolation. Its assessment has to be incorporated into the theoretical framework of measurement. Measurement equivalence is closely connected to construct validity, for it refers to the way manifest indicators are related to the latent variable, within a particular culture and across different cultures. From this it follows that equivalence cannot, or should not, be treated as a separate issue but as a constitutive element of validity. A discussion of measurement equivalence without addressing validity would be incomplete.
David V. Day and Matthew F. Barney
Abstract
This chapter presents Infosys’ approach to leader development that includes the practical benefits of psychometric and statistical methods commonly used by other disciplines, such as Rasch measurement and latent growth modeling. Infosys is beginning to use these with other individualized leader development practices such as coaching, intervention bundling, and evaluation. When combined, these elements have the potential to personalize developmental processes to each leader and improve microlevel leadership theory with the overarching purpose of enhancing global leadership at Infosys and promoting the science of individual leader development.
Allaa Barefah, Elspeth McKay and Sulaiman Alqahtani
Abstract
There is continual evidence of ineffective e-Learning programmes that are set amid emerging information and communication technology (ICT) tools by higher education (HE) providers. While many of the existing accounts outline the potential of integrating such educational technology into teaching and learning practice, other studies point out the adoption challenges of such programmes. This chapter tackles this dilemma in two respects. First, it examines the limitations of instructional systems design (ISD) models, urging the need for empirical evidence and ratification processes to substantiate these models as they relate to online instructional environments. Second, it investigates the effectiveness of ICT tools under different instructional environments in order to facilitate the effective application of e-Learning. Field evaluation in the form of a series of 2×3 factorial quasi-experiments was conducted at four higher education institutions in Saudi Arabia. The empirical results confirm the validity of the ISD model and reliably capture its effects in improving learners’ performance under three instructional delivery modes. The empirical evidence reveals the extent to which the proposed prescriptive ISD model is effective, enabling an improved design of ICT-based HE instructional strategies. At a managerial level, the findings support delivery-mode decision making by HE providers in terms of the congruence of technology integration under each of the three learning experiences. The calibrated assessment measures provide a basis for extending current e-Pedagogical practice in the e-Learning industry.
Abstract
Knowledge about cognitive operations and processes (COPs) required for success (1=correct, 0=incorrect) on test items or learning tasks is very important for in-depth understanding of the nature of student performance and the development of valid instruments for its measurement. A key problem in obtaining such knowledge is the validation of hypothesized COPs and their role in the measurement properties of test items. To provide validation feedback for both normally achieving students and students with learning disabilities, it is important to obtain information on the validity of the COPs for students at different ability levels and individual test items (or tasks). To address this issue, the present chapter introduces a method of estimating the probability of correct performance on individual COPs at fixed ability levels, thus providing validity information across ability levels and individual test items. When item response theory (IRT) estimates of the item parameters are known (e.g., in a test bank of IRT-calibrated items or published research), the proposed validation method does not require information about raw (or ability) scores of examinees. The method is illustrated with IRT-calibrated algebra and reading comprehension test items.
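The chapter's key premise — that success probabilities on individual items can be evaluated at fixed ability levels once IRT item parameters are known, without examinees' raw scores — can be sketched in Python. This is a minimal illustration using the two-parameter logistic (2PL) model; the item parameters below are hypothetical, not taken from the chapter:

```python
import math

def p_correct_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model:
    P(X=1 | theta) = 1 / (1 + exp(-a * (theta - b)))
    where a is discrimination and b is item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item parameters, e.g. from a calibrated item bank.
items = [
    {"a": 1.2, "b": -0.5},  # easier, more discriminating item
    {"a": 0.8, "b": 1.0},   # harder, less discriminating item
]

# Evaluate success probabilities at fixed ability levels;
# no raw or ability scores of individual examinees are needed.
for theta in (-1.0, 0.0, 1.0):
    probs = [round(p_correct_2pl(theta, it["a"], it["b"]), 3) for it in items]
    print(f"theta={theta:+.1f}: {probs}")
```

The same fixed-theta evaluation extends to hypothesized COPs once each operation has calibrated parameters, which is the validation feedback the chapter describes.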
Pravin Chopade, Michael Yudelson, Benjamin Deonovic and Alina A. von Davier
Abstract
This chapter focuses on the state-of-the-art modeling approaches used in Intelligent Tutoring Systems (ITSs) and the frameworks for researching and operationalizing individual and group models of performance, knowledge, and interaction. We adapt several ITS methodologies to model team performance as well as the individual performance of team members. We briefly describe the point processes proposed by von Davier and Halpin (2013), and we also introduce the Competency Architecture for Learning in teaMs (CALM) framework, an extension of the Generalized Intelligent Framework for Tutoring (GIFT) (Sottilare, Brawner, Goldberg, & Holden, 2012) to be used in team settings.
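The point processes of von Davier and Halpin (2013) model the timing of team members' actions; a Hawkes process, in which each event transiently raises the rate of subsequent events, is a standard choice for such interaction dynamics. A minimal sketch follows; the parameter values (mu, alpha, beta) and event timestamps are illustrative assumptions, not values from the chapter:

```python
import math

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.5):
    """Conditional intensity of a univariate Hawkes process:
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    mu is the baseline rate; each past event (e.g. a teammate's action)
    excites the process, then its influence decays at rate beta."""
    return mu + alpha * sum(
        math.exp(-beta * (t - ti)) for ti in events if ti < t
    )

events = [1.0, 1.2, 3.5]  # hypothetical timestamps of team actions
print(hawkes_intensity(2.0, events))   # elevated shortly after the burst near t=1
print(hawkes_intensity(10.0, events))  # decays back toward the baseline mu
```

Clustering of responses between teammates (mutual excitation) is what such models exploit to quantify interaction, as opposed to treating actions as independent events.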
Colin Dingler, Alina A. von Davier and Jiangang Hao
Abstract
Purpose
Increased interest in team dynamics has resulted in new methods for measuring teamwork over time. The primary purpose of this chapter is to provide a survey of recent developments in teamwork/collaboration measurement in an educational context. Key topics include conceptual frameworks, large-scale assessments, and innovative measurement techniques.
Methodology/approach
A range of methods for collecting and analyzing teamwork data are discussed, and five frameworks for measuring collaborative problem solving (CPS) over time are compared. Frameworks from Programme for International Student Assessment (PISA), Assessment and Teaching of 21st Century Skills (ATC21S) project, Educational Testing Service (ETS), ACT, and von Davier and Halpin (2013) are discussed. Results of assessments developed from these frameworks are also considered.
Social/practical implications
New techniques for measuring team dynamics over time have great potential to improve education and work outcomes. Preliminary results of the assessments developed from these frameworks show that important advances in teamwork measurement have been enabled by innovative task designs, data-mining techniques, and novel applications of stochastic models.
Originality/value
This novel overview and comparison of interdisciplinary approaches will help to indicate where progress has been made and what challenges are ahead.
Gerald Tindal and Joseph F.T. Nese
Abstract
We write this chapter using a historical discourse, both in the chronology of research and in the development that has occurred over the years with curriculum-based measurement (CBM). More practically, however, we depict the chronology in terms of the sequence of decisions that educators make as they provide special services to students with disabilities. In the first part of the chapter, we begin with a pair of seminal documents that were written in the late 1970s to begin the story of CBM. In the second part of the chapter, we begin with the first decision an educator needs to make in providing special services and then we continue through the chronology of decisions to affect change in learning for individual students. In the end, we conclude with the need to integrate these decisions with multiple references for interpreting data: normative to allocate resources, criterion to diagnose skill deficits, and individual to evaluate instruction.
Abstract
The present chapter addresses a topic of growing interest – namely, the exploration of alternative item response theory (IRT) models for noncognitive assessment. Previous research in the assessment of trait emotional intelligence (or “trait emotional self-efficacy”) has been limited to traditional psychometric techniques (e.g., classical test theory) under the notion of a dominance response process describing the relationship between individuals' latent characteristics and their response selection. The present study presents the first unfolding IRT modeling effort in the general field of emotional intelligence (EI). We applied the Generalized Graded Unfolding Model (GGUM) in order to evaluate the response process and the item properties of the short form of the trait emotional intelligence questionnaire (TEIQue-SF). A sample of 866 participants completed the English version of the TEIQue-SF. Results suggest that the GGUM has an adequate fit to the data. Furthermore, inspection of the test information and standard error functions revealed that the TEIQue-SF is accurate for low and middle scores on the construct; however, several items had low discrimination parameters. Implications for the benefits of unfolding models in the assessment of trait EI are discussed.
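The contrast the abstract draws — a dominance process, where endorsement rises monotonically with the latent trait, versus an unfolding process, where endorsement peaks near the item's location — can be illustrated with a toy comparison. The unfolding function below is a simplified single-peaked kernel for illustration only, not the full GGUM likelihood, and all parameter values are assumptions:

```python
import math

def dominance_prob(theta, a, b):
    """Dominance response process (2PL): the probability of endorsing
    the item increases monotonically with the latent trait theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def unfolding_prob(theta, a, delta):
    """Simplified single-peaked (unfolding) response function:
    endorsement is highest when theta is closest to the item location
    delta and falls off symmetrically on both sides. Illustrative
    squared-distance kernel, not the GGUM likelihood."""
    return math.exp(-a * (theta - delta) ** 2)

# Hypothetical item located at delta = 0: under unfolding, both very
# low and very high trait levels endorse less than moderate levels,
# which a dominance model cannot represent.
for theta in (-2.0, 0.0, 2.0):
    print(f"theta={theta:+.1f}  dominance={dominance_prob(theta, 1.0, 0.0):.3f}"
          f"  unfolding={unfolding_prob(theta, 1.0, 0.0):.3f}")
```

Fitting the wrong process can distort item parameter estimates, which is why the chapter's GGUM application is a substantive check rather than a purely technical exercise.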