Search results: 1–10 of 81
This chapter focuses on the state-of-the-art modeling approaches used in Intelligent Tutoring Systems (ITSs) and the frameworks for researching and operationalizing individual and group models of performance, knowledge, and interaction. We adapt several ITS methodologies to model team performance as well as the individual performance of team members. We briefly describe the point processes proposed by von Davier and Halpin (2013), and we also introduce the Competency Architecture for Learning in teaMs (CALM) framework, an extension of the Generalized Intelligent Framework for Tutoring (GIFT) (Sottilare, Brawner, Goldberg, & Holden, 2012) to be used for team settings.
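The point-process approach mentioned above treats team interaction events (e.g., communications) as arrivals whose rate depends on the history of prior events. As a minimal illustration, and not the exact specification used by von Davier and Halpin (2013), a univariate self-exciting (Hawkes) process with exponential decay can be simulated via Ogata's thinning algorithm; the parameter values below are arbitrary:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate event times of a univariate Hawkes process on [0, horizon]
    via Ogata's thinning. Intensity: lambda(t) = mu + sum_i alpha * exp(-beta*(t - t_i)).
    """
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < horizon:
        # Between events the intensity only decays, so its value at the
        # current time is a valid upper bound for the thinning step.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)  # propose next candidate time
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

# Illustrative run: baseline rate 0.5, each event briefly raises the rate.
times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=100.0, seed=1)
```

The self-exciting structure captures the intuition that one team member's action raises the short-term probability of a response, which is what makes point processes attractive for modeling interaction dynamics.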
A statement from Michell (Michell, J., “Normal science, pathological science, and psychometrics”, Theory and Psychology, Vol. 10 No. 5, 2000, pp. 639‐67), “psychometrics is a pathology of science”, is contrasted with conventional definitions provided by leading texts. The key to understanding why Michell has made such a statement is bound up in the definition of measurement that characterises quantification of variables within the natural sciences. By describing the key features of quantitative measurement, and contrasting these with current psychometric practice, it is argued that Michell is correct in his assertion. Three avenues of investigation would seem to follow from this position, each of which, it is suggested, will gradually replace current psychometric test theory, principles, and properties. The first attempts to construct variables that can be demonstrated empirically to possess a quantitative structure. The second proceeds on the basis of using qualitative (non‐quantitatively structured) variable structures and procedures. The third, applied numerics, is an applied methodology whose sole aim is pragmatic utility; it is similar in some respects to current psychometric procedures except that “test theory” can be discarded in favour of simpler tests of observational reliability and validity. Examples are presented of what future practice may look like in each of these areas. It is to be hoped that psychometrics begins to concern itself more with the logic of its measurement, rather than the ever‐increasing complexity of its numerical and statistical operations.
Team cohesion and other team processes are inherently dynamic mechanisms that contribute to team effectiveness. Unfortunately, extant research has typically treated team cohesion and other processes as static and has failed to capture how these processes change over time and the implications of these changes. In this chapter, we discuss the characteristics of team process dynamics and highlight the importance of temporal considerations when measuring team cohesion. We introduce innovative research methods that can be applied to assess and monitor team cohesion and other process dynamics. Finally, we discuss future directions for research and practical applications of these new methods to enhance our understanding of the dynamics of team cohesion and other processes.
This paper aims to respond to John Rossiter's call for a "Marketing measurement revolution" in the current issue of EJM, and to provide broader comment on Rossiter's C‐OAR‐SE framework and on measurement practice in marketing more generally.
The paper is purely theoretical, based on interpretation of measurement theory.
The authors find that much of Rossiter's diagnosis of the problems facing measurement practice in marketing and social science is highly relevant. However, the authors find themselves opposed to the revolution advocated by Rossiter.
The paper presents a comment based on interpretation of measurement theory and observation of practices in marketing and social science. As such, the interpretation is itself open to disagreement.
There are implications for those outside academia who wish to use measures derived from academic work, as well as for those who wish to derive their own measures of key marketing and other social variables.
This paper is one of the few to explicitly respond to the C‐OAR‐SE framework proposed by Rossiter, and presents a number of points critical to good measurement theory and practice, which appear to remain underdeveloped in marketing and social science.
Due to the paucity of research on web-based job applicant screening (i.e. cybervetting), the purpose of the current study was to examine the psychometric properties of cybervetting, including an examination of the impact of adding structure to the rating process.
Using a mixed-factorial design, 122 supervisors conducted cybervetting evaluations of applicant personality, cognitive ability, written communication skills, professionalism, and overall suitability. Cross-method agreement (i.e. the degree of similarity between cybervetting ratings and other assessment methods), interrater reliability, and interrater agreement were examined, and unstructured and structured cybervetting rating formats were compared.
Cybervetting assessments demonstrated high interrater reliability and interrater agreement, but only limited evidence of cross-method agreement was provided. In addition, adding structure to the cybervetting process did not enhance the psychometric properties of this assessment technique.
This study highlighted that whereas cybervetting raters demonstrated a high degree of consensus in cybervetting-based attributions, there may be concerns regarding assessment accuracy, as cybervetting-based ratings generally differed from applicant test scores and self-assessment ratings. Thus, employers should use caution when utilizing this pre-employment screening technique.
Whereas previous research has suggested that cybervetting ratings demonstrate convergence with other traditional assessments (albeit with relatively small effects), these correlational links do not provide information regarding cross-method agreement or method interchangeability. Thus, this study bridges a crucial gap in the literature by examining cross-method agreement for a variety of job-relevant constructs, as well as empirically testing the impact of adding structure to the cybervetting rating process.
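The interrater agreement examined in the study above is typically quantified with a chance-corrected index; the abstract does not specify which one, so as an illustration, Cohen's kappa for two raters assigning categorical ratings can be computed as follows (the example ratings are hypothetical):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the agreement expected by chance from each rater's marginal rates.
    """
    assert len(ratings_a) == len(ratings_b), "raters must score the same items"
    n = len(ratings_a)
    # Observed proportion of items on which the raters agree.
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(freq_a) | set(freq_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    if p_expected == 1.0:
        return 1.0  # both raters used one identical category throughout
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical suitability ratings of four applicants by two raters:
kappa = cohens_kappa(["hire", "hire", "reject", "reject"],
                     ["hire", "reject", "reject", "reject"])
```

Values near 1 indicate the kind of high consensus reported for cybervetting raters, while cross-method agreement would instead compare these ratings against scores from a different assessment method entirely.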
The subject matter of psychology can be illustrated by addressing the questions, ‘who am I?’, ‘what do I spend my time doing?’ and ‘reflecting on my life, where have I come from and where am I going?’. I am a human being and also more generally an animal. I am a biological entity and also a psychological entity. I have an identity and a personality. I am an individual and also a member of a social system. I am similar to other human beings but also different. In some respects I am normal and in other respects abnormal.