Search results

1 – 10 of 506
Article
Publication date: 1 July 2012

Sungho Cho and Joon-Ho Kang

Abstract

This empirical study examines the psychometric comparability of Aaker's Brand Personality Scale (Aaker, 1997) in sponsorship matching. It employs a structural validation protocol - the congenerity test (Ohanian, 1990) - to investigate the extent to which sports events and sponsors can be psychometrically matched. The results show that sports events and sponsors are comparable on only a limited number of the dimensions of the a priori scale. Theoretical and practical implications are discussed.

Details

International Journal of Sports Marketing and Sponsorship, vol. 13 no. 4
Type: Research Article
ISSN: 1464-6668

Article
Publication date: 17 February 2012

James Reardon and Chip Miller

Abstract

Purpose

Methodological advances in cross‐cultural scale development have addressed many concerns regarding the development of valid scales. However, several issues remain to be examined – including the potential problems of using language to measure communication phenomena in self‐reported studies, and the effect of response scale type on the validity of the resultant measures. The purpose of this paper is to expand the cross‐cultural measurement paradigm by comprehensively examining these issues and suggesting a new response scale type that may potentially produce more valid cross‐cultural measures of communication‐based phenomena.

Design/methodology/approach

Measures of Hall's concept of context were developed using three types of response scales – Likert, semantic differential, and conceptual metaphoric. The last response scale type is developed within this research. Samples were gathered in 23 countries using existing scale development procedures. The response scales were compared for psychometric properties and validity based on reliability, metric invariance, response styles, and face validity.
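Of the psychometric properties compared above, internal-consistency reliability is the most mechanical to compute. The following is an illustrative sketch of Cronbach's alpha, not code from the article; the function name and the NumPy dependency are assumptions for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Perfectly redundant items yield alpha = 1; values around 0.7 or higher are conventionally read as adequate reliability for comparisons of the kind reported here.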

Findings

Overall, all three response scale types adequately measured the construct of context. The newly developed conceptual metaphoric scale performed marginally better on most comparative metrics.

Practical implications

International marketers measure a host of variables related to culture for many purposes. The new response scale type may provide slightly better measures that more accurately reflect communication‐based constructs – many of which are central to marketing.

Originality/value

The findings indicate that the new conceptual metaphoric response scale type may overcome some existing biases inherent in standard response scale types. In addition, this research provides the first viable and parsimonious measure of Hall's concept of context.

Details

International Marketing Review, vol. 29 no. 1
Type: Research Article
ISSN: 0265-1335

Article
Publication date: 1 October 2013

K. Damon Aiken, Richard M Campbell and Eric C Koch

Abstract

This paper investigates the brand personality dimensions associated with professional sports teams and the personality dimensions of the cities they call home. Two studies evaluate ten National Football League teams and their respective homes, with data collected from 434 respondents from five disparate locations throughout the United States. Findings suggest that, in general, people ascribe similar personality traits to both a team and its home city, with correlations stronger among avid NFL fans. It appears that city 'insiders' (i.e. residents) tend to hold more positive views than city 'outsiders'.

Details

International Journal of Sports Marketing and Sponsorship, vol. 15 no. 1
Type: Research Article
ISSN: 1464-6668

Article
Publication date: 3 April 2018

Matthias von Davier

Abstract

Purpose

Surveys that include skill measures may suffer from additional sources of error compared to those containing questionnaires alone. Examples are distractions such as noise or interruptions of testing sessions, as well as fatigue or lack of motivation to succeed. This paper aims to provide a review of statistical tools based on latent variable modeling approaches extended by explanatory variables that allow detection of survey errors in skill surveys.

Design/methodology/approach

This paper reviews psychometric methods for detecting sources of error in cognitive assessments and questionnaires. Aside from traditional item responses, new sources of data in computer-based assessment are available – timing data from the Programme for the International Assessment of Adult Competencies (PIAAC) and data from questionnaires – to help detect survey errors.

Findings

Some unexpected results are reported. Respondents who tend to use response sets have lower expected values on PIAAC literacy scales, even after controlling for scores on the skill-use scale that was used to derive the response tendency.
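A response tendency of the kind described above can be flagged with very simple statistics. The sketch below is illustrative only, not PIAAC's actual detection method; it scores how often consecutive answers repeat, a crude proxy for straight-lining:

```python
from typing import Sequence

def straightlining_index(responses: Sequence[int]) -> float:
    """Fraction of consecutive answer pairs that are identical --
    a crude proxy for response-set (straight-lining) behaviour."""
    if len(responses) < 2:
        return 0.0
    repeats = sum(a == b for a, b in zip(responses, responses[1:]))
    return repeats / (len(responses) - 1)
```

An index near 1 means the respondent gave the same answer to nearly every consecutive item; such cases would then be inspected alongside covariates, as in the analysis above.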

Originality/value

The use of new sources of data, such as timing and log-file or process data information, provides new avenues to detect response errors. It demonstrates that large data collections need to better utilize available information and that integration of assessment, modeling and substantive theory needs to be taken more seriously.

Details

Quality Assurance in Education, vol. 26 no. 2
Type: Research Article
ISSN: 0968-4883

Book part
Publication date: 12 November 2021

Kaylee Litson and David Feldon

Abstract

There is currently a great deal of attention in psychometric and statistical methods on ensuring measurement invariance when examining measures across time or populations. When measurement invariance is established, changes in scores over time or across groups can be attributed to changes in the construct rather than changes in reaction to or interpretation of the measurement instrument. When measurement is not invariant, it is possible that measured differences are due to the measurement instrument itself and not to the underlying phenomenon of interest. This chapter discusses the importance of establishing measurement invariance specifically in postsecondary settings, where it is anticipated that individuals' perspectives will change over time as a function of their higher education experiences. Using examples from several measures commonly used in higher education research, the concepts and processes underlying tests of measurement invariance are explained and analyses are interpreted using data from a US-based longitudinal study on bioscience PhD students. These measures include sense of belonging over time and across groups, mental well-being over time, and perceived mentorship quality over time. The chapter ends with a discussion about the implications of longitudinal and group measurement invariance as an important conceptual property for advancing equitable, reproducible, and generalizable quantitative research in higher education. Invariance methods may further be relevant for addressing criticisms, raised by critical theorists engaging quantitative research strategies, that quantitative analyses are biased toward majority populations.
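In practice, each step of such an invariance test is a comparison of nested models: a constrained model (e.g. equal loadings across groups or waves) against a freer one, via a likelihood-ratio chi-square difference. A minimal sketch of that comparison, assuming the log-likelihoods come from an SEM package; the function names are illustrative, not from the chapter:

```python
def lr_statistic(ll_free: float, ll_constrained: float) -> float:
    """Likelihood-ratio statistic: 2 * (logL of the freer model minus
    logL of the constrained model); asymptotically chi-square with df
    equal to the number of extra constraints."""
    return 2.0 * (ll_free - ll_constrained)

def invariance_tenable(ll_free: float, ll_constrained: float,
                       critical_value: float) -> bool:
    """The constraints (e.g. metric invariance) are retained when fit
    does not worsen significantly, i.e. the statistic stays below the
    chi-square critical value for the difference in df."""
    return lr_statistic(ll_free, ll_constrained) < critical_value
```

For example, with three extra constraints the 5% chi-square critical value is about 7.81; a statistic below that would support retaining the invariance constraints.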

Details

Theory and Method in Higher Education Research
Type: Book
ISBN: 978-1-80262-441-0

Book part
Publication date: 6 March 2009

Thomas Salzberger, Hartmut H. Holzmüller and Anne Souchon

Abstract

Measures are comparable if and only if measurement equivalence has been demonstrated. Although comparability and equivalence of measures are sometimes used interchangeably, we advocate a subtle but important difference in meaning. Comparability implies that measures from one group can be compared with measures from another group. It is a property of the measures that either holds or does not. In particular, comparability presumes valid measures within each group compared. Measurement equivalence, by contrast, refers to the way measures are derived and estimated. It is intrinsically tied to the underlying theory of measurement. Thus, measurement equivalence cannot be dealt with in isolation. Its assessment has to be incorporated into the theoretical framework of measurement. Measurement equivalence is closely connected to construct validity, for it refers to the way manifest indicators are related to the latent variable, within a particular culture and across different cultures. From this it follows that equivalence cannot, or should not, be treated as a separate issue but as a constitutive element of validity. A discussion of measurement equivalence without addressing validity would be incomplete.

Details

New Challenges to International Marketing
Type: Book
ISBN: 978-1-84855-469-6

Article
Publication date: 3 April 2018

Kentaro Yamamoto and Mary Louise Lennon

Abstract

Purpose

Fabricated data jeopardize the reliability of large-scale population surveys and reduce the comparability of such efforts by destroying the linkage between data and measurement constructs. Such data result in the loss of comparability across participating countries and, in the case of cyclical surveys, between past and present surveys. This paper aims to describe how data fabrication can be understood in the context of the complex processes involved in the collection, handling, submission and analysis of large-scale assessment data. The actors involved in those processes, and their possible motivations for data fabrication, are also elaborated.

Design/methodology/approach

Computer-based assessments produce new types of information that enable us to detect the possibility of data fabrication, and therefore the need for further investigation and analysis. The paper presents three examples that illustrate how data fabrication was identified and documented in the Programme for the International Assessment of Adult Competencies (PIAAC) and the Programme for International Student Assessment (PISA) and discusses the resulting remediation efforts.

Findings

For two countries that participated in the first round of PIAAC, the data showed a subset of interviewers who handled many more cases than others. In Case 1, the average proficiency for respondents in those interviewers’ caseloads was much higher than expected and included many duplicate response patterns. In Case 2, anomalous response patterns were identified. Case 3 presents findings based on data analyses for one PISA country, where results for human-coded responses were shown to be highly inflated compared to past results.
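The duplicate-pattern signal in Case 1 can be illustrated with a toy check: exact duplicates of a long response vector are vanishingly unlikely by chance, so a high duplicate share within one interviewer's caseload is suspicious. A hypothetical sketch, not the authors' actual procedure:

```python
from collections import Counter

def duplicate_pattern_share(patterns: list[str]) -> float:
    """Share of cases in a caseload whose full response pattern also
    occurs in at least one other case of the same caseload."""
    if not patterns:
        return 0.0
    counts = Counter(patterns)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / len(patterns)
```

Caseloads whose share greatly exceeds the survey-wide baseline would then be candidates for the kind of follow-up investigation the paper describes.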

Originality/value

This paper shows how new sources of data, such as timing information collected in computer-based assessments, can be combined with other traditional sources to detect fabrication.

Details

Quality Assurance in Education, vol. 26 no. 2
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 19 May 2023

Akilimali Ndatabaye Ephrem and McEdward Murimbika

Abstract

Purpose

However well established existing measurements of entrepreneurial potential (EP) may appear in the literature, they are fragmented, lack theoretical integration and clarity, are inadequately specified and assessed, and leave their dimensions unordered by importance. These limitations of EP metrics have hindered the advancement of entrepreneurial practice and theory. There is a risk of atomistic evolution of the topic among “siloed” scholars, with room for repetition without real progress. The purpose of this paper was to take stock of existing measurements, from which the authors developed a new instrument that is brief and inclusive.

Design/methodology/approach

The authors followed several steps to develop and validate the new instrument, including construct domain name specification, literature review, structured interviews with entrepreneurs, face validation by experts, semantic validation and statistical validation after two waves of data collected on employee and entrepreneur samples.

Findings

A clear operational definition of EP is proposed and serves as a starting point towards a unified EP theory. The new EP instrument is made up of 34 items classified into seven dimensions, which in order of importance are proactive innovativeness, management skill, calculated risk-taking, social skill, financial literacy, entrepreneurial competencies prone to cognitive and heuristic biases and bricolage. The authors provide evidence for reliability and validity of the new instrument.

Research limitations/implications

While no single model is definitive, the authors discuss several ways in which the new measurement model can be used by different stakeholders to promote entrepreneurship.

Originality/value

The authors discuss the domain representativeness of the new scale and argue that the literature can meaningfully benefit from a non-fuzzy approach to what makes the EP of an individual. By developing a new EP instrument, the authors set an important pre-condition for advancing entrepreneurial theory and practice.

Details

Journal of Research in Marketing and Entrepreneurship, vol. 26 no. 1
Type: Research Article
ISSN: 1471-5201

Article
Publication date: 1 October 1996

Naresh K. Malhotra, James Agarwal and Mark Peterson

Abstract

Notes that methodological problems are hampering the growth of cross‐cultural marketing research and presents a review of methodological issues to address these problems. Organizes these issues around a six‐step framework which includes elements such as problem definition, the development of an approach and research design formulation. Notes that the marketing research problem can be defined by comparing the phenomenon or behaviour in separate cultural contexts and eliminating the influence of the self‐reference criterion. Discusses issues in data analysis such as treatment of outliers and standardization of data. Concludes with an interpretation of results and report presentation.

Details

International Marketing Review, vol. 13 no. 5
Type: Research Article
ISSN: 0265-1335

Article
Publication date: 6 January 2021

Emily Fulcher and Helen Pote

Abstract

Purpose

Since the concept was first introduced, numerous mental health literacy (MHL) definitions and associated measures have been created, and these have yet to be adequately evaluated. This paper evaluates the psychometric properties of global MHL measures with the aim of identifying the most valid, reliable, responsive and interpretable measure.

Design/methodology/approach

A systematic review was conducted of studies that evaluated global MHL measures against at least one of the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) taxonomy properties: validity, reliability, responsiveness or interpretability.

Findings

In total, 13 studies were identified that examined the psychometric properties of 7 MHL measures. Two of these seven measures used a vignette format and the remaining five were questionnaires. The Mental Health Promoting Knowledge-10 and the Multicomponent Mental Health Literacy Measure were the most psychometrically robust global MHL measures, as they had the most psychometric properties rated as adequate. Both were shown to have adequate structural validity, internal consistency and construct validity. The two vignette measures, the MHL tool for the workplace and the vignette MHL measure, were shown to have adequate evidence only for construct validity.

Originality/value

The current study is the first to systematically review research that evaluated the psychometric properties of global measures of MHL.
