Search results
1 – 10 of over 19,000
Andy Inett, Grace Wright, Louise Roberts and Anne Sheeran
Abstract
Purpose
Offenders with intellectual disability (ID) have been largely neglected in past forensic literature on the assessment of dynamic risk factors. The purpose of this paper is to evaluate the predictive validity of the Short-Term Assessment of Risk and Treatability (START) in a sample of males with IDs in a low-secure hospital (n=28).
Design/methodology/approach
A prospective analysis was conducted, with START scores as the predictor variables, and the number of recorded aversive incidents as the outcome measure.
Findings
Receiver operating characteristic (ROC) analysis demonstrated that total START risk scores had high, statistically significant predictive accuracy for incidents of physical aggression towards others (area under the curve (AUC)=0.710, p<0.001) and property damage/theft (AUC=0.730, p<0.001) over a 30-day period, reducing to medium predictive validity over a 90-day period. Medium predictive validity was also identified for incidents of verbal aggression, suicide, self-harm, and stalking and intimidation. START strength scores were also predictive of overt aggression (AUC=0.716); possible reasons for this are explored.
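As a purely illustrative aside (not from the paper), an AUC of the kind reported above can be computed directly from raw risk scores and binary incident outcomes via the Mann–Whitney formulation; all numbers below are invented:

```python
# Toy illustration: ROC AUC for a dynamic risk score against a binary
# incident outcome (all data invented, not from the START study).

def roc_auc(scores, outcomes):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outranks a randomly chosen negative,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical START-style risk totals and 30-day incident flags
scores   = [12, 25, 31, 8, 27, 15, 29, 10]
outcomes = [0, 1, 1, 0, 1, 0, 0, 0]
print(round(roc_auc(scores, outcomes), 3))  # → 0.867
```

An AUC of 0.5 corresponds to chance-level prediction, so values around 0.7, as reported in the paper, indicate moderate-to-high discrimination.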
Research limitations/implications
The small sample size limits the generalisability of the findings, and further research is required.
Practical implications
The paper offers preliminary support for the use of the START with ID offenders in low-secure settings. Given the lack of validated dynamic risk assessment tools for this population, multi-disciplinary teams in such settings now have the option of using a tool with potentially good validity for an ID population.
Originality/value
This study represents the first attempt to examine the predictive validity of the START with ID offenders, and a step forward in the understanding of dynamic risk factors for violence in this population. The significant predictive relationship with incidents of physical aggression and property damage offers clinicians a preliminary evidence base supporting its use in low-secure settings.
Florian Kock, Adiyukh Berbekova, A. George Assaf and Alexander Josiassen
Abstract
Purpose
The purpose of this paper, a critical reflection, is twofold. First, by comprehensively reviewing scale development procedures in hospitality research, a concerning lack of nomological validity testing is demonstrated. Second, the need for nomological validity testing is discussed and both conceptually and empirically reasoned.
Design/methodology/approach
This research systematically reviews scale development studies in three leading hospitality journals (Cornell Hospitality Quarterly, International Journal of Contemporary Hospitality Management and International Journal of Hospitality Management) over ten years (2012–2021) to analyze the completeness of scale development procedures. Specifically, the authors evaluate whether the reviewed studies test the nomological and predictive validity of the newly developed measures.
Findings
The results indicate a concerning gap in the current practices in hospitality research. Specifically, only 33.3% of the examined studies assess nomological validity. These findings collectively underscore the need for improving the comprehensiveness of scale development processes in hospitality research.
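For readers unfamiliar with the practice, a nomological validity check amounts to verifying that scores on a newly developed scale correlate in the theoretically expected direction with an established, related construct. A minimal sketch follows; the scores and the `pearson_r` helper are invented for illustration, not drawn from the reviewed studies:

```python
# Minimal sketch of a nomological validity check: the new scale should
# correlate positively with a theoretically related, established
# construct (all scores invented).

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

new_scale = [3.2, 4.1, 2.8, 4.5, 3.9, 2.5]   # hypothetical new measure
related   = [3.0, 4.4, 2.6, 4.2, 4.0, 2.9]   # established construct
print(round(pearson_r(new_scale, related), 3))
```

A strong correlation in the predicted direction supports the new scale's place in its nomological network; a weak or reversed one signals trouble the 33.3% figure above suggests is rarely checked.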
Research limitations/implications
The study offers important implications for hospitality researchers. The paper provides an extensive discussion on the importance and benefits of testing for nomological validity in scale development studies, contributing to the completeness and consistency of scale development procedures in the hospitality discipline.
Originality/value
This research critically assesses prevalent, and widely accepted, scale development procedures in hospitality research. This research empirically demonstrates the neglect of nomological validity issues in scale development practices in hospitality research. Scale development is an essential scientific practice used to create a research instrument in a field of study, improving our understanding of a specific phenomenon and contributing to knowledge creation. Considering the significance of scale development in advancing the field of hospitality research, the validation procedures involved in the scale development processes are of utmost importance and should be thoroughly applied.
François A. Carrillat, Fernando Jaramillo and Jay P. Mulki
Abstract
Purpose
The purpose is to investigate the difference between SERVQUAL's and SERVPERF's predictive validity for service quality.
Design/methodology/approach
Data from 17 studies containing 42 effect sizes of the relationships between SERVQUAL or SERVPERF with overall service quality (OSQ) are meta‐analyzed.
Findings
Overall, SERVQUAL and SERVPERF are equally valid predictors of OSQ. Adapting the SERVQUAL scale to the measurement context improves its predictive validity; conversely, the predictive validity of SERVPERF is not improved by context adjustments. In addition, measures of service quality gain predictive validity when used in less individualistic cultures, non-English-speaking countries, and industries with an intermediate level of customization (e.g. hotels, rental cars, or banks).
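To illustrate the kind of pooling such a meta-analysis performs, a sample-size-weighted mean effect size can be sketched as follows; the correlations and sample sizes are invented, not taken from the 17 studies analyzed here:

```python
# Sketch of a sample-size-weighted mean effect size, in the spirit of a
# meta-analysis pooling scale-OSQ correlations (all numbers invented).

def weighted_mean_r(effects):
    """effects: list of (r, n) pairs; returns the n-weighted mean r."""
    total_n = sum(n for _, n in effects)
    return sum(r * n for r, n in effects) / total_n

# Hypothetical per-study correlations with overall service quality
servqual_effects = [(0.55, 120), (0.48, 200), (0.62, 90)]
servperf_effects = [(0.58, 150), (0.50, 180)]
print(round(weighted_mean_r(servqual_effects), 3),
      round(weighted_mean_r(servperf_effects), 3))
```

Weighting by sample size lets larger studies dominate the pooled estimate, which is what gives meta-analysis the statistical power the abstract refers to.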
Research limitations/implications
No studies using non-adapted scales were conducted outside the USA, making it impossible to disentangle the impact of scale adaptation from that of contextual differences on the moderating effect of language and culture. More comparative studies of adapted vs non-adapted scales outside the USA are needed before this issue can be settled meta-analytically.
Practical implications
SERVQUAL scales need to be adapted to the study context more than SERVPERF scales do. Owing to their equivalent predictive validity, the choice between SERVQUAL and SERVPERF should be dictated by whether diagnostic detail (SERVQUAL) or a shorter instrument (SERVPERF) is required.
Originality/value
Because of the high statistical power of meta-analysis, these findings can be considered a major step toward ending the debate over whether SERVPERF is superior to SERVQUAL as an indicator of OSQ.
Awni Zebda, Barney Cargile, Mary Christ, Rick Christ and James Johnston
Abstract
Auditing researchers have recommended that the use of audit decision models should be subject to cost‐benefit analysis. This paper provides insight into cost‐benefit analysis and its shortcomings as a tool for evaluating audit decision models. The paper also identifies and discusses the limitations of other evaluation methods. Finally, the paper suggests the use of model confidence as an alternative to model value and model validity.
Abstract
Purpose
In this study the Core4 model is proposed as a new model of leader behaviour.
Design/methodology/approach
Two independent samples were used to test the construct validity of this model in comparison to a seven-factor transformational/transactional leadership model. Next, convergent and discriminant validity of the Core4 model were examined. The Core4 Leadership Questionnaire was also tested for multigroup invariance. Predictive validity of the Core4 model was compared to that of a transformational/transactional model.
Findings
Results showed that the Core4 model better fitted the data than the transformational/transactional model. A seven-factor transformational/transactional model could not be established. The findings supported convergent and discriminant validity. The Core4 Leadership Questionnaire was not completely invariant across manufacturing and service organisations, but seems appropriate for application in different environments. The Core4 model was more strongly related to the criterion variables than a four-factor transformational/transactional leadership model.
Originality/value
This research shows that the Core4 model offers a valid alternative for the transformational/transactional model of leader behaviour.
John Alban‐Metcalfe, Beverly Alimo‐Metcalfe and Miranda Hughes
Abstract
Purpose
This paper aims to examine empirical evidence of the criterion, construct, and face validity of two processes commonly used in selection – selection interviews and assessment centres (ACs) – in the selection of chairs of primary care trusts.
Design/methodology/approach
A critical review of the literature and an empirical investigation are undertaken.
Findings
Evidence is presented of the reliability and the predictive, construct, and face validity of using a combination of selection interviews and AC methodology in appointments to public office. In the light of the evidence of the potential benefits of using more than one approach, it is suggested that a combination of AC methodology and panel interviews be used in making public sector appointments.
Practical implications
The evidence presented supports the decision of the Appointment Commission to use AC methodology in the selection for positions in public office, and points to ways in which the process could be improved.
Originality/value
The paper provides empirical evidence of the reliability and validity of two methodologies used in selection to posts.
Pratyush N. Sharma, Benjamin D. Liengaard, Joseph F. Hair, Marko Sarstedt and Christian M. Ringle
Abstract
Purpose
Researchers often stress the predictive goals of their partial least squares structural equation modeling (PLS-SEM) analyses. However, the method has long lacked a statistical test to compare different models in terms of their predictive accuracy and to establish whether a proposed model offers a significantly better out-of-sample predictive accuracy than a naïve benchmark. This paper aims to address this methodological research gap in predictive model assessment and selection in composite-based modeling.
Design/methodology/approach
Recent research has proposed the cross-validated predictive ability test (CVPAT) to compare theoretically established models. This paper proposes several extensions that broaden the scope of CVPAT and explains the key choices researchers must make when using them. A popular marketing model is used to illustrate the CVPAT extensions’ use and to make recommendations for the interpretation and benchmarking of the results.
Findings
This research asserts that prediction-oriented model assessments and comparisons are essential for theory development and validation. It recommends that researchers routinely consider the application of CVPAT and its extensions when analyzing their theoretical models.
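The core logic of CVPAT can be sketched informally: pair each case's out-of-sample loss under the theoretical model with its loss under a naïve benchmark, then test whether the average loss difference is significantly negative. The toy sketch below uses invented losses and a hand-rolled paired t-statistic as a stand-in, not the authors' actual procedure:

```python
# Hedged sketch of the CVPAT idea: compare per-case out-of-sample
# squared errors of a theoretical model against a naive benchmark
# (e.g., predicting the training mean). All losses are invented.
import math

def paired_t(d):
    """Paired t-statistic for a list of per-case loss differences."""
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

model_loss     = [0.8, 1.1, 0.6, 0.9, 1.0, 0.7, 0.8, 1.2]
benchmark_loss = [1.4, 1.3, 1.1, 1.5, 1.2, 1.0, 1.6, 1.3]
diffs = [m - b for m, b in zip(model_loss, benchmark_loss)]
t = paired_t(diffs)  # negative t favors the model's predictive accuracy
print(round(t, 2))
```

A significantly negative statistic would indicate that the proposed model predicts held-out cases better than the naïve benchmark, which is the kind of evidence the authors argue should be reported routinely.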
Research limitations/implications
The findings offer several avenues for future research to extend and strengthen prediction-oriented model assessment and comparison in PLS-SEM.
Practical implications
Guidelines are provided for applying CVPAT extensions and reporting the results to help researchers substantiate their models’ predictive capabilities.
Originality/value
This research contributes to strengthening the predictive model validation practice in PLS-SEM, which is essential to derive managerial implications that are typically predictive in nature.
Peter J. Danaher and Vanessa Haddrell
Abstract
Many different scales have been used to measure customer satisfaction. These scales can be divided into three main groups: those measuring performance, disconfirmation and satisfaction. Reports on the design and execution of a study of hotel guests in which they were asked to rate the key service attributes of their stay using all three of these measurement scales. Repurchase intention and word-of-mouth effects were also measured. Compares the scales on the basis of reliability, convergent and discriminant validity, predictive validity, skewness, face validity and managerial value for directing a quality improvement programme. Shows the disconfirmation scale to be superior to both the performance and satisfaction scales on all these criteria except predictive validity. In addition, the performance scale was generally better than the satisfaction scale on a number of these criteria.
Marilyn A. Sher, Lucy Warner, Anne McLean, Katharyn Rowe and Ernest Gralton
Abstract
Purpose
The purpose of this paper is to explore the validity and reliability of the Short-Term Assessment of Risk and Treatability: Adolescent Version (START:AV) to determine if it has predictive accuracy in relation to physical aggression, severe verbal aggression, property damage and self-harm, in a medium secure setting. In addition, the authors hoped to provide some of the first descriptive data available for the START:AV among a UK adolescent population in a medium secure adolescent unit.
Design/methodology/approach
The sample consisted of 90 female and male adolescents, with and without developmental disabilities. It was important to explore the measure's predictive accuracy across specific population groups: between males and females, and between those with and without developmental disabilities.
Findings
Some significant relationships were found between the START:AV and adverse outcomes. For instance, total strength and vulnerability scores were predictive for verbal and physical aggression. Differences in predictive validity were evident when comparisons were made between males and females, with relationships being evident amongst the male population only. When splitting the male sample into developmental disability and non-developmental disability groups, significant relationships were found between strength and vulnerability scores and verbal and physical aggression.
Practical implications
A number of practical implications are considered: the START:AV is relevant for use with adolescents in hospital settings, and the significant inverse relationship between strength scores and negative outcomes supports the importance of considering protective/strength factors when working with at-risk youths.
Originality/value
There is currently limited validation data for the START:AV in the UK or elsewhere.
Marina Krcmar and Matthew Allen Lapierre
Abstract
Purpose
This paper aims to revise an earlier version of a measure used to assess parent–child consumer-based communication to better capture how parents talk with their children about consumer matters.
Design/methodology/approach
Three separate studies were used to revise the measure. The first tested the original measure with parents and children in a supermarket to determine its predictive validity. The second utilized focus groups with parents to refine the measure. The final study sampled 503 parents via MTurk to test the performance of the revised measure regarding reliability and validity.
Findings
The first study found that the original scale did not perform well in predicting child consumer behavior. The second study asked parents to describe in their own words how they talk to their own children about consumer issues. Using these insights, the final study used the redesigned scale and identified four dimensions of the consumer-related family communication patterns instrument: collaborative communication, control communication, product value and commercial truth. These four dimensions showed good reliability, convergent validity and predictive validity.
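As a hedged illustration of the reliability testing mentioned above, Cronbach's alpha for one hypothetical scale dimension can be computed as follows; the item responses are invented and unrelated to the authors' data:

```python
# Illustrative Cronbach's alpha for one hypothetical scale dimension
# (all item responses invented).

def cronbach_alpha(items):
    """items: list of item-score lists, one inner list per item,
    each inner list holding one score per respondent."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Four respondents answering three Likert items of one dimension
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(items), 3))
```

Values of alpha around 0.7 or higher are conventionally read as acceptable internal consistency, which is the sense in which the four dimensions above are described as reliable.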
Research limitations/implications
With an updated measure of parent–child consumer-based communication that more closely matches how parents talk to their children about consumer issues, this measure can help researchers understand how children are socialized as consumers.
Originality/value
This study offers researchers a reliable and valid measure of parent–child consumer-based communication that can help inform future studies on this important topic.