Search results

1 – 10 of over 27,000
Article
Publication date: 4 November 2014

Nismen Lathif, Muhammad Chishty and Emily Phipps

Huntington's disease (HD) is diagnosed by genetic testing, and predictive testing for HD has been available for almost two decades. In the age of advancing genetic techniques…

Abstract

Purpose

Huntington's disease (HD) is diagnosed by genetic testing, and predictive testing for HD has been available for almost two decades. In the age of advancing genetic techniques, the question arises of how predictive tests can affect a person, his or her family and relatives, life choices and future. The paper aims to discuss these issues.

Design/methodology/approach

A case study is presented demonstrating the complex issues surrounding genetic testing in HD. Relevant literature was then reviewed to further explore the ethical issues linked to predictive testing for HD and to examine findings on how this complex issue might be resolved.

Findings

Predictive testing in HD raises ethical issues in the social, legal, economic and, above all, personal spheres of the individual and of society. Education and dissemination of knowledge to the general public regarding the test, its impact and the illness itself would be a starting point for resolving these issues. Counselling and support for patients in this context are vital, hence the imperative need to ensure standardised training and an adequate supply of professionals in this setting. A universal and enforceable framework along the lines of the International Huntington Association's recommendations should be adopted nationally.

Originality/value

This paper presents a case study of significant value in demonstrating the challenges posed by genetic testing in HD, and provides insight into an issue significant for all clinicians.

Details

Social Care and Neurodisability, vol. 5 no. 4
Type: Research Article
ISSN: 2042-0919

Article
Publication date: 12 February 2024

Florian Kock, Adiyukh Berbekova, A. George Assaf and Alexander Josiassen

The purpose of this paper, a critical reflection, is twofold. First, by comprehensively reviewing scale development procedures in hospitality research, a concerning lack of…

Abstract

Purpose

The purpose of this paper, a critical reflection, is twofold. First, by comprehensively reviewing scale development procedures in hospitality research, a concerning lack of nomological validity testing is demonstrated. Second, the need for nomological validity testing is discussed and both conceptually and empirically reasoned.

Design/methodology/approach

This research systematically reviews scale development studies in three leading hospitality journals (Cornell Hospitality Quarterly, International Journal of Contemporary Hospitality Management and International Journal of Hospitality Management) over ten years (2012–2021) to analyze the completeness of scale development procedures. Specifically, the authors evaluate whether the reviewed studies test the nomological and predictive validity of the newly developed measures.

Findings

The results indicate a concerning gap in the current practices in hospitality research. Specifically, only 33.3% of the examined studies assess nomological validity. These findings collectively underscore the need for improving the comprehensiveness of scale development processes in hospitality research.

Research limitations/implications

The study offers important implications for hospitality researchers. The paper provides an extensive discussion on the importance and benefits of testing for nomological validity in scale development studies, contributing to the completeness and consistency of scale development procedures in the hospitality discipline.

Originality/value

This research critically assesses prevalent and widely accepted scale development procedures in hospitality research, and empirically demonstrates the neglect of nomological validity in current scale development practices. Scale development is an essential scientific practice used to create research instruments in a field of study, improving our understanding of specific phenomena and contributing to knowledge creation. Given the significance of scale development in advancing hospitality research, the validation procedures involved in scale development should be applied thoroughly.

Details

International Journal of Contemporary Hospitality Management, vol. 36 no. 10
Type: Research Article
ISSN: 0959-6119

Article
Publication date: 29 November 2019

A. George Assaf and Mike G. Tsionas

This paper aims to present several Bayesian specification tests for both in- and out-of-sample situations.

Abstract

Purpose

This paper aims to present several Bayesian specification tests for both in- and out-of-sample situations.

Design/methodology/approach

The authors focus on the Bayesian equivalents of the frequentist approach for testing heteroskedasticity, autocorrelation and functional form specification. For out-of-sample diagnostics, the authors consider several tests to evaluate the predictive ability of the model.
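
The abstract names the families of tests but not their mechanics. One widely used Bayesian diagnostic in this spirit is the posterior predictive check; the sketch below is a generic illustration under an assumed Gaussian model with hypothetical posterior draws, not the authors' procedure.

```python
import random
import statistics

def posterior_predictive_pvalue(y, draws, stat=statistics.variance, sims=400):
    """Generic posterior predictive check: the share of replicated datasets
    whose test statistic meets or exceeds the observed one. Values near
    0 or 1 flag model misfit; mid-range values are unremarkable."""
    obs = stat(y)
    exceed = 0
    for _ in range(sims):
        mu, sigma = random.choice(draws)             # one posterior draw
        rep = [random.gauss(mu, sigma) for _ in y]   # replicated dataset
        if stat(rep) >= obs:
            exceed += 1
    return exceed / sims

random.seed(7)
y = [random.gauss(0.0, 1.0) for _ in range(80)]      # hypothetical observed data
draws = [(0.0, 1.0)] * 200                           # hypothetical posterior draws (mu, sigma)
p = posterior_predictive_pvalue(y, draws)
```

The same machinery extends to autocorrelation or functional-form checks by swapping in a different test statistic.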

Findings

The authors demonstrate the performance of these tests using an application on the relationship between price and occupancy rate from the hotel industry. For purposes of comparison, the authors also provide evidence from traditional frequentist tests.

Research limitations/implications

There certainly exist other issues and diagnostic tests that are not covered in this paper. The issues that are addressed, however, are critically important and can be applied to most modeling situations.

Originality/value

With the increased use of the Bayesian approach in various modeling contexts, this paper serves as an important guide for diagnostic testing in Bayesian analysis. Diagnostic analysis is essential and should always accompany the estimation of regression models.

Details

International Journal of Contemporary Hospitality Management, vol. 32 no. 4
Type: Research Article
ISSN: 0959-6119

Article
Publication date: 7 June 2019

Minji Kim and Joseph N. Cappella

In the field of public relations and communication management, message evaluation has been one of the starting points for evaluation and measurement research at least since the…

Abstract

Purpose

In the field of public relations and communication management, message evaluation has been one of the starting points for evaluation and measurement research at least since the 1970s. Reliable and valid message evaluation has a central role in message effects research and campaign design in other disciplines as well as communication science. The purpose of this paper is to offer a message testing protocol to efficiently acquire valid and reliable message evaluation data.

Design/methodology/approach

A message testing protocol is described in terms of how to conceptualize and evaluate the content and format of messages, in terms of procedures for acquiring and testing messages and in terms of using efficient, reliable and valid measures of perceived message effectiveness (PME) and perceived argument strength (PAS). The evidence supporting the reliability and validity of PME and PAS measures is reviewed.

Findings

The message testing protocol developed and reported is an efficient, reliable and valid approach for testing large numbers of messages.

Research limitations/implications

Researchers’ ability to select candidate messages for subsequent deeper testing, for various types of communication campaigns and for research in theory-testing contexts is facilitated. The risk of using a single instance of a message to represent a category (the case-category confound) is reduced.

Practical implications

Communication campaign designers are armed with tools to assess messages and campaign concepts quickly and efficiently, reducing pre-testing time and resources while identifying “best-in-show” examples and prototypes.

Originality/value

Message structures are conceptualized in terms of content and format features using theoretically driven constructs. Measures of PAS and PME are reviewed for their reliability, construct and predictive validity, finding that the measures are acceptable surrogates for actual effectiveness for a wide variety of messages and applications. Coupled with procedures that reduce confounding by randomly nesting messages within respondents and respondents to messages, the measures used and protocol deployed offer an efficient and utilitarian approach to message testing and modeling.

Details

Journal of Communication Management, vol. 23 no. 3
Type: Research Article
ISSN: 1363-254X

Article
Publication date: 14 July 2022

Pratyush N. Sharma, Benjamin D. Liengaard, Joseph F. Hair, Marko Sarstedt and Christian M. Ringle

Researchers often stress the predictive goals of their partial least squares structural equation modeling (PLS-SEM) analyses. However, the method has long lacked a statistical…

Abstract

Purpose

Researchers often stress the predictive goals of their partial least squares structural equation modeling (PLS-SEM) analyses. However, the method has long lacked a statistical test to compare different models in terms of their predictive accuracy and to establish whether a proposed model offers a significantly better out-of-sample predictive accuracy than a naïve benchmark. This paper aims to address this methodological research gap in predictive model assessment and selection in composite-based modeling.

Design/methodology/approach

Recent research has proposed the cross-validated predictive ability test (CVPAT) to compare theoretically established models. This paper proposes several extensions that broaden the scope of CVPAT and explains the key choices researchers must make when using them. A popular marketing model is used to illustrate the CVPAT extensions’ use and to make recommendations for the interpretation and benchmarking of the results.
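
The abstract does not spell out CVPAT's mechanics; at its core, CVPAT compares two models' cross-validated prediction losses with a paired difference test. A minimal sketch under that assumption, using hypothetical residuals for a proposed model and a naïve benchmark:

```python
import math

def paired_loss_test(errors_a, errors_b):
    """Paired t-statistic on per-case loss differences (squared errors).
    A significantly negative mean difference favours model A."""
    d = [ea ** 2 - eb ** 2 for ea, eb in zip(errors_a, errors_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    t = mean / math.sqrt(var / n)
    return mean, t

# hypothetical out-of-sample residuals per case
prop = [0.2, -0.1, 0.3, -0.2, 0.1, 0.25, -0.15, 0.05]   # proposed model
naive = [0.9, -0.8, 1.1, -0.7, 0.8, 1.0, -0.9, 0.6]     # naïve benchmark
mean_diff, t_stat = paired_loss_test(prop, naive)
```

A negative `mean_diff` with a large-magnitude `t_stat` indicates the proposed model predicts significantly better than the benchmark.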

Findings

This research asserts that prediction-oriented model assessments and comparisons are essential for theory development and validation. It recommends that researchers routinely consider the application of CVPAT and its extensions when analyzing their theoretical models.

Research limitations/implications

The findings offer several avenues for future research to extend and strengthen prediction-oriented model assessment and comparison in PLS-SEM.

Practical implications

Guidelines are provided for applying CVPAT extensions and reporting the results to help researchers substantiate their models’ predictive capabilities.

Originality/value

This research contributes to strengthening the predictive model validation practice in PLS-SEM, which is essential to derive managerial implications that are typically predictive in nature.

Details

European Journal of Marketing, vol. 57 no. 6
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 30 August 2013

Brent Rollins, Shravanan Ramakrishnan and Matthew Perri

Predictive genetic tests (PGTs) have greatly increased their presence in the market, and, much like their pharmaceutical peers, companies offering PGTs have increasingly used…

Abstract

Purpose

Predictive genetic tests (PGTs) have greatly increased their presence in the market, and, much like their pharmaceutical peers, companies offering PGTs have increasingly used direct‐to‐consumer advertising as part of their promotional strategy. Given that many PGTs are available without a prescription or physician order, and that there is little empirical research examining the effects of PGT‐DTC advertising, this paper examines consumer attitudes, intentions and behavior in response to a PGT‐DTC ad with and without a prescription requirement.

Design/methodology/approach

A single-factor, between-subjects online survey design, with the presence or absence of a prescription requirement as the experimental variable, was used to evaluate consumers' attitudes, intentions and behavior in response to a predictive genetic test DTC advertisement. A minimum sample size of 198 was determined a priori; 206 surveys were completed within five hours of deployment to 600 randomly selected general consumer participants, a response rate of 34.3 percent (206/600), with 106 respondents in the prescription-requirement group and 100 in the non-prescription group. Descriptive statistics, t-tests and chi-square techniques were used to examine the dependent variables (consumer attitudes, behavioral intentions and the pre-defined behavior measure) and their differences.

Findings

Overall, consumers held favorable attitudes toward PGT‐DTC ads but did not intend to discuss the test with a physician, take the test or search for further information. The effect of a prescription requirement was not significant, as no differences were observed in the attitude and behavioral-intention dependent variables.

Originality/value

At this relatively early point in the PGT life cycle, consumers still seem to be skeptical about the value of predictive genetic tests and their associated DTC advertisements.

Details

International Journal of Pharmaceutical and Healthcare Marketing, vol. 7 no. 3
Type: Research Article
ISSN: 1750-6123

Article
Publication date: 8 February 2016

Byron Sharp and Nicole Hartnett

This paper aims to reflect on the generalisability of the predictive validity test of the Persuasion Principles Index (PPI) conducted by Armstrong et al. (2016).

Abstract

Purpose

This paper aims to reflect on the generalisability of the predictive validity test of the Persuasion Principles Index (PPI) conducted by Armstrong et al. (2016).

Design/methodology/approach

Different aspects of the test are considered, such as the sample of ads, the dependent variable and the comparability of the methods used to predict effectiveness, in terms of how relevant these are to real-world advertising testing.

Findings

The sample of ads and the testing procedure may have contributed to the success of the PPI predictions over the other copy-testing methods. The sample of print ads does not bear a close resemblance to current advertising. The competing copy tests do not represent modern advertising copy testing.

Research limitations/implications

More research is needed to test the validity of the principles and the predictive accuracy of the PPI across a range of conditions (e.g. different ads, media, products and cultures). Testing against advertising sales effectiveness would be the ideal next step.

Practical implications

It certainly seems the index method has the potential to help advertisers make better decisions regarding what executions to support, for high-involvement products at least. Given the accessibility of the software, it should be easy and cost effective for advertisers to trial the PPI.

Originality/value

This commentary directs researchers to the real-world conditions under which advertising pre-tests need to be evaluated.

Details

European Journal of Marketing, vol. 50 no. 1/2
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 6 August 2020

Wynne Chin, Jun-Hwa Cheah, Yide Liu, Hiram Ting, Xin-Jean Lim and Tat Huei Cham

Partial least squares structural equation modeling (PLS-SEM) has become popular in the information systems (IS) field for modeling structural relationships between latent…

Abstract

Purpose

Partial least squares structural equation modeling (PLS-SEM) has become popular in the information systems (IS) field for modeling structural relationships between latent variables as measured by manifest variables. However, while researchers using PLS-SEM routinely stress the causal-predictive nature of their analyses, model evaluation relies exclusively on criteria designed to assess the path model's explanatory power. To take full advantage of PLS-SEM's causal-predictive purpose, researchers must understand the efficacy of the various quality criteria, such as traditional PLS-SEM criteria, model fit, PLSpredict, the cross-validated predictive ability test (CVPAT) and model selection criteria.

Design/methodology/approach

A systematic review was conducted of empirical studies employing the causal prediction criteria available for PLS-SEM, drawn from Industrial Management & Data Systems (IMDS) and Management Information Systems Quarterly (MISQ). Furthermore, this study details each of the procedures for the causal prediction criteria available for PLS-SEM and explains how these criteria should be interpreted. While the focus of the paper is on demystifying the role of causal prediction modeling in PLS-SEM, the overarching aim is to compare the performance of the different quality criteria and to select the appropriate causal-predictive model from a cohort of competing models in the IS field.

Findings

The study found that the traditional PLS-SEM criteria (goodness of fit (GoF) by Tenenhaus, R2 and Q2) and model fit have difficulty determining the appropriate causal-predictive model. In contrast, PLSpredict, CVPAT and model selection criteria (i.e. Bayesian information criterion (BIC), BIC weight, Geweke–Meese criterion (GM), GM weight, HQ and HQC) were found to outperform the traditional criteria in determining the appropriate causal-predictive model, because these criteria provided both in-sample and out-of-sample predictions in PLS-SEM.
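
The criterion weights mentioned above (BIC weight, GM weight) convert raw information-criterion values into a relative measure of support across competing models. A minimal sketch of this computation, with hypothetical BIC values for three competing models:

```python
import math

def ic_weights(ics):
    """Turn raw information-criterion values (e.g. BIC, GM) into normalised
    model weights: each weight is exp(-0.5 * delta), where delta is the
    model's criterion value minus the minimum in the set, normalised to
    sum to one. Lower criterion value -> higher weight."""
    best = min(ics)
    raw = [math.exp(-0.5 * (v - best)) for v in ics]
    total = sum(raw)
    return [r / total for r in raw]

# hypothetical BICs for three competing structural models
bics = [412.7, 405.2, 418.9]
weights = ic_weights(bics)  # the second model receives almost all the weight
```

The model with the smallest criterion value dominates; the weights quantify how decisively it does so.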

Originality/value

This research substantiates the use of the PLSpredict, CVPAT and the model selection criteria (i.e. BIC, BIC weight, GM, GM weight, HQ and HQC). It provides IS researchers and practitioners with the knowledge they need to properly assess, report on and interpret PLS-SEM results when the goal is only causal prediction, thereby contributing to safeguarding the goal of using PLS-SEM in IS studies.

Details

Industrial Management & Data Systems, vol. 120 no. 12
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 17 January 2020

Catherine Prentice

This study aims to draw on complexity theory and uses an asymmetrical method – fuzzy-set qualitative comparative analysis (fsQCA) – to test the core tenets of complexity…

Abstract

Purpose

This study aims to draw on complexity theory and uses an asymmetrical method – fuzzy-set qualitative comparative analysis (fsQCA) – to test the core tenets of complexity theory, namely asymmetry, equifinality, causal complexity and valence reversals or conjunction, with a focus on the relationships between service quality, customer satisfaction and loyalty. Asymmetric testing assesses case outcome forecasting accuracy rather than relationships.

Design/methodology/approach

Both a symmetrical method (structural equation modelling, SEM) and an asymmetrical method (fsQCA) were used to test the proposed relationships (symmetrical testing) and case outcome forecasting accuracy (asymmetric testing); the former served as a comparison. The study was set in Australian airports, and data were collected from departing passengers.

Findings

The results from SEM and fsQCA differ substantially. The former provides very simplistic findings of directional relationships between variables, whereas the latter reveals asymmetrical, equifinal and conjunctional relationships among service quality, customer satisfaction and behavioural intentions. These findings support the core tenets of complexity theory.

Research limitations/implications

The study findings conform to complexity theory, which indicates that relationships between variables can be nonlinear and that the same causes can produce different effects. The findings suggest the outcomes of interest often result from combined antecedent conditions rather than a single causal factor. The study confirms that asymmetrical thinking relies on Boolean algebra and set theory principles.
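
In fsQCA, sufficiency claims of this kind are evaluated with Ragin's consistency measure over fuzzy set memberships. A minimal sketch of that formula, with hypothetical membership scores for two conditions:

```python
def consistency(x, y):
    """Consistency of 'X is sufficient for Y' over fuzzy memberships:
    the sum of min(x_i, y_i) divided by the sum of x_i (Ragin's formula).
    Values close to 1 indicate X membership is consistently a subset of Y."""
    num = sum(min(a, b) for a, b in zip(x, y))
    den = sum(x)
    return num / den

# hypothetical fuzzy membership scores for five cases
service = [0.9, 0.7, 0.4, 0.8, 0.2]   # membership in 'high service quality'
loyalty = [0.8, 0.9, 0.3, 0.7, 0.4]   # membership in 'loyal customer'
c = consistency(service, loyalty)      # 2.7 / 3.0 = 0.9
```

Asymmetry falls out of the set-theoretic framing: the consistency of X for Y generally differs from that of Y for X.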

Originality/value

This study uses both symmetrical and asymmetrical methods to reveal nuanced information about relationships that have previously been tested primarily with symmetrical methods.

Details

Journal of Services Marketing, vol. 34 no. 2
Type: Research Article
ISSN: 0887-6045

Article
Publication date: 1 November 1996

Awni Zebda, Barney Cargile, Mary Christ, Rick Christ and James Johnston

Auditing researchers have recommended that the use of audit decision models should be subject to cost‐benefit analysis. This paper provides insight into cost‐benefit analysis and…

Abstract

Auditing researchers have recommended that the use of audit decision models should be subject to cost‐benefit analysis. This paper provides insight into cost‐benefit analysis and its shortcomings as a tool for evaluating audit decision models. The paper also identifies and discusses the limitations of other evaluation methods. Finally, the paper suggests the use of model confidence as an alternative to model value and model validity.

Details

Managerial Finance, vol. 22 no. 11
Type: Research Article
ISSN: 0307-4358
