Search results

1 – 10 of over 53,000
Article

Lars Lyberg, Kristen Cibelli Hibben and Beth-Ellen Pennell

Abstract

Purpose

Surveys in multinational, multiregional and multicultural contexts (or “3MC” surveys) are becoming increasingly important to global and regional decision-making and theory building. To serve this purpose, the surveys need to be well managed, with an awareness of key sources of survey error and how to minimize them, mechanisms in place to control the implementation process and an ability to intervene in that process when necessary in a spirit of continuous improvement (Pennell et al., 2017). One key approach for managing and assessing the quality of 3MC surveys is the total survey error (TSE) framework and associated survey process quality. This paper aims to examine the application of the TSE framework and survey process quality to the Programme for the International Assessment of Adult Competencies (PIAAC).

Design/methodology/approach

The authors begin with a background on TSE and discuss recent adaptations of TSE and survey process quality for 3MC surveys. They then present a TSE framework tailored to PIAAC, with examples of potential contributions to error and ways to address them through effective quality assurance (QA) and quality control (QC) approaches.

Findings

Overall, the authors find that the design and implementation of the first cycle of PIAAC largely reflect the current best practice for 3MC surveys. However, the authors identify several potential contributions to error that may threaten comparability in PIAAC and ways these could be addressed in the upcoming cycle.

Originality/value

With a view toward continuous improvement, the final section draws on the survey process quality approach adapted from Hansen et al.’s study (2016) to summarize the recommendations in terms of additional QA elements (inputs and activities) and associated QC elements (measures and reports) for PIAAC’s consideration in the next cycle.

Details

Quality Assurance in Education, vol. 26 no. 2
Type: Research Article
ISSN: 0968-4883


Details

Travel Survey Methods
Type: Book
ISBN: 978-0-08-044662-2


Details

Transport Survey Methods
Type: Book
ISBN: 978-1-78-190288-2

Article

Gosia Ludwichowska, Jenni Romaniuk and Magda Nenycz-Thiel

Abstract

Purpose

Despite the growing availability of scanner-panel data, surveys remain the most common and inexpensive method of gathering marketing metrics. The purpose of this paper is to explore the size, direction and correction of response errors in retrospective reports of category buying.

Design/methodology/approach

Self-reported purchase frequency data were validated using British household panel records and the negative binomial distribution (NBD) in six packaged goods categories. Log-likelihood theory and the fit of the NBD model were used to test an approach to adjusting the errors after data collection.
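
The NBD validation described here can be sketched with SciPy; this is a minimal illustration assuming SciPy's `nbinom` parameterization, with made-up counts rather than the study's panel data:

```python
import numpy as np
from scipy import optimize, stats

# Illustrative self-reported purchase counts for one category (not the study's data)
counts = np.array([0] * 60 + [1] * 20 + [2] * 10 + [3] * 5 + [6] * 3 + [12] * 2)

def nbd_nll(params, data):
    """Negative log-likelihood of the NBD (negative binomial) model."""
    r, p = params
    return -stats.nbinom.logpmf(data, r, p).sum()

# Maximum-likelihood fit of the NBD parameters
res = optimize.minimize(nbd_nll, x0=[0.5, 0.5], args=(counts,),
                        bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
r_hat, p_hat = res.x

# Metrics the abstract says raw self-reports tend to overestimate
penetration = 1 - stats.nbinom.pmf(0, r_hat, p_hat)   # share buying at least once
mean_purchases = stats.nbinom.mean(r_hat, p_hat)      # average purchase frequency
```

Comparing `penetration` and `mean_purchases` from the fitted model against the raw self-reported aggregates is one way to quantify, and then adjust for, the over-reporting described in the findings.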

Findings

The authors found variations in systematic response errors according to buyer type. Specifically, lighter buyers tend to forward-telescope their buying episodes. Heavier buyers tend either to over-use a rate-based estimate of once-a-month buying, over-reporting purchases at multiples of six, or to use round numbers. These errors lead to overestimates of penetration and average purchase frequency. Adjusting the aggregate data to fit the NBD, however, improves the accuracy of these metrics.

Practical implications

In light of the importance of purchase data for decision making, the authors describe the inaccuracy problem in frequency reports and offer practical suggestions regarding the correction of survey data.

Originality/value

Two novel contributions are offered here: an investigation of errors in different buyer groups and use of the NBD in survey accuracy research.

Details

European Journal of Marketing, vol. 51 no. 7/8
Type: Research Article
ISSN: 0309-0566

Article

Godson A. Tetteh, Kwasi Amoako-Gyampah and Amoako Kwarteng

Abstract

Purpose

Several research studies on Lean Six Sigma (LSS) have been done using the survey methodology. However, the use of surveys often relies on the measurement of variables, which cannot be directly observed, with attendant measurement errors. The purpose of this study is to develop a methodological framework consisting of a combination of four tools for identifying and assessing measurement error during survey research.

Design/methodology/approach

This paper evaluated the viability of the framework through an experimental study on the assessment of project management success in a developing country environment. The research design combined a control group, pretest and post-test measurements with structural equation modeling that enabled the assessment of differences between honest and fake survey responses. This paper tested for common method variance (CMV) using the chi-square test for the difference between unconstrained and fully constrained models.
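
The unconstrained-versus-fully-constrained comparison described here boils down to a chi-square difference test on nested models. A minimal sketch, with hypothetical fit statistics rather than values from the paper:

```python
from scipy import stats

def chi_square_difference(chi2_unconstr, df_unconstr, chi2_constr, df_constr):
    """Chi-square difference test for two nested SEM models.

    The constrained model fixes loadings to be equal across constructs;
    a significant difference indicates the constraint worsens fit, which
    is the basis for diagnosing common method variance.
    """
    delta_chi2 = chi2_constr - chi2_unconstr
    delta_df = df_constr - df_unconstr
    p_value = stats.chi2.sf(delta_chi2, delta_df)
    return delta_chi2, delta_df, p_value

# Hypothetical fit statistics, not taken from the paper
d_chi2, d_df, p = chi_square_difference(312.4, 164, 389.8, 178)
```

In practice the two chi-square values would come from fitting the unconstrained and fully constrained models in an SEM package such as lavaan or semopy.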

Findings

The CMV results confirmed that there was significant shared variance among the different measures, allowing the authors to distinguish between trait and faking responses and to ascertain how much of the observed process measurement is due to measurement system variation as opposed to variation arising from the study’s constructs.

Research limitations/implications

The study was conducted in one country, and hence, the results may not be generalizable.

Originality/value

Measurement error during survey research, if not properly addressed, can lead to incorrect conclusions that harm theory development. It can also lead to inappropriate recommendations for practicing managers. This study provides findings from a framework developed and assessed in an LSS project environment for identifying faking responses. The framework’s four tools provide guidelines on distinguishing between fake and trait responses and should be of great value to researchers.

Details

International Journal of Lean Six Sigma, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2040-4166

Article

Nina Reynolds and Adamantios Diamantopoulos

Abstract

Although pretesting is an essential part of the questionnaire design process, the range of methodological work on pretesting issues is limited. The present paper concentrates on the effect of the pretest survey method on error detection by contrasting respondents who are interviewed personally with those who receive an impersonal survey method. The interaction between survey method and respondent knowledge of the questionnaire topic is also considered. The findings show that the pretest method does have an effect on the error detection rate of respondents; however, the hypothesised interaction between method and knowledge was not unequivocally supported. The detailed results illustrate which error types are affected by the method used during pretesting. Implications for future research are considered.

Details

European Journal of Marketing, vol. 32 no. 5/6
Type: Research Article
ISSN: 0309-0566

Article

Mike Raybould and Liz Fredline

Abstract

Purpose

The purpose of this paper is to investigate whether providing additional prompts in a visitor expenditure survey results in higher reported expenditure.

Design/methodology/approach

Respondents to a self-completion survey of event visitors were randomly allocated either an aggregated or a disaggregated expenditure format in a quasi-experimental design. ANOVA was used to identify significant differences in mean reported expenditure between the alternative formats.
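
A design like this, two randomly allocated format groups compared by ANOVA, can be sketched with SciPy; the expenditure figures below are simulated placeholders, not the survey's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated total reported expenditure per respondent (placeholder data)
aggregated = rng.gamma(shape=2.0, scale=50.0, size=200)     # single broad question
disaggregated = rng.gamma(shape=2.0, scale=65.0, size=200)  # itemized prompts

# One-way ANOVA: does mean reported expenditure differ between formats?
f_stat, p_value = stats.f_oneway(aggregated, disaggregated)
```

With only two groups, this is equivalent to an independent-samples t-test (the F statistic is the square of the t statistic).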

Findings

The research finds that provision of additional prompts in the expenditure module of a visitor survey results in higher reported expenditures in half the expenditure categories and, most importantly, in total expenditure.

Research limitations/implications

Collection of accurate visitor expenditure data is critical to estimating the economic benefits of tourism and special events. Over- or underestimation of direct expenditures associated with an event may have implications for future investment in the event by public and/or private agencies.

Originality/value

Very few field tests of this fundamental issue in measurement error have been reported in the tourism literature. The few reported examples have tended to yield results inconsistent with a priori expectations, but they have been based on very small sample sizes and are therefore limited by low power. This study is based on a large sample and produces results consistent with a priori expectations.

Details

International Journal of Event and Festival Management, vol. 3 no. 2
Type: Research Article
ISSN: 1758-2954

Article

Julie Anna Guidry

Abstract

Respondents’ comments on the LibQUAL+™ spring 2001 survey were examined to refine the instrument and reduce non-sampling error. Respondents’ unsolicited e-mail messages were analyzed using the qualitative data analysis software Atlas.ti. Results showed that the major problem with the survey was its length, which was due to a combination of factors. This information helped the survey designers reduce the number of library service quality items from 56 to 25 and address technical problems with the Web-based survey. The steps followed in conducting the Atlas.ti analysis are also discussed in depth.

Details

Performance Measurement and Metrics, vol. 3 no. 2
Type: Research Article
ISSN: 1467-8047

Article

Matthias von Davier

Abstract

Purpose

Surveys that include skill measures may suffer from additional sources of error compared to those containing questionnaires alone. Examples are distractions such as noise or interruptions of testing sessions, as well as fatigue or lack of motivation to succeed. This paper aims to provide a review of statistical tools based on latent variable modeling approaches extended by explanatory variables that allow detection of survey errors in skill surveys.

Design/methodology/approach

This paper reviews psychometric methods for detecting sources of error in cognitive assessments and questionnaires. Aside from traditional item responses, new sources of data in computer-based assessment are available – timing data from the Programme for the International Assessment of Adult Competencies (PIAAC) and data from questionnaires – to help detect survey errors.

Findings

Some unexpected results are reported. Respondents who tend to use response sets have lower expected values on PIAAC literacy scales, even after controlling for scores on the skill-use scale that was used to derive the response tendency.
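
A response set of the kind described (for example, straightlining through a Likert battery) can be flagged with a simple per-respondent index; the function and cutoff below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def straightlining_index(responses):
    """Fraction of a respondent's items equal to their modal answer.

    responses: sequence of Likert codes for one respondent.
    An index near 1.0 suggests a response set (e.g. all '3's).
    """
    responses = np.asarray(responses)
    _, item_counts = np.unique(responses, return_counts=True)
    return item_counts.max() / responses.size

flat = straightlining_index([3, 3, 3, 3, 3, 3, 3, 3])    # 1.0
varied = straightlining_index([1, 4, 2, 5, 3, 2, 4, 1])  # 0.25

suspect = flat > 0.9  # illustrative cutoff, not a published threshold
```

Indices like this, combined with the timing and log-file data the abstract mentions, allow flagged response patterns to be entered as explanatory variables in the latent variable models.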

Originality/value

The use of new sources of data, such as timing and log-file or process data information, provides new avenues to detect response errors. It demonstrates that large data collections need to better utilize available information and that integration of assessment, modeling and substantive theory needs to be taken more seriously.

Details

Quality Assurance in Education, vol. 26 no. 2
Type: Research Article
ISSN: 0968-4883


Details

Handbook of Transport Modelling
Type: Book
ISBN: 978-0-08-045376-7
