Search results

1 – 10 of over 18,000
Article
Publication date: 12 June 2007

Jenny A. Darby

Abstract

Purpose

The purpose of this research is to examine participants' response rates on dual style training course evaluation forms, which combine structured and open‐ended formats. Pencil and paper forms have a long history of use by trainers in business and commerce and, more recently, in education. Research methods texts have tended to neglect the issue of response rates with this type of form.

Design/methodology/approach

Approximately 2,000 course participants attending 28 courses completed evaluation forms. These were designed with a series of structured response scales followed by a section for open‐ended comments.

Findings

It was found that the completion rate for the open‐ended sections was low, and thus validity was suspect. Various explanations were offered for this. Subsequently, when a redesigned evaluation form was administered to a further 1,641 course participants, response rates increased dramatically when open‐ended sections were placed earlier in the questionnaire.

Practical implications

Indicates ways in which course evaluation forms can be redesigned to increase response rates for open‐ended sections and thus improve the validity of any findings.

Originality/value

Provides information, neglected by most methodology texts, about response rates in training evaluation questionnaires that include open‐ended sections.

Details

Journal of European Industrial Training, vol. 31 no. 5
Type: Research Article
ISSN: 0309-0590

Details

Contingent Valuation: A Critical Assessment
Type: Book
ISBN: 978-1-84950-860-5

Article
Publication date: 1 March 2006

Jenny A. Darby

Abstract

Purpose

The purpose of this study is to examine factors which influence responses on open‐ended evaluations of training courses.

Design/methodology/approach

Course participants completed open‐ended evaluation forms about their experience on a course. The participants were 377 senior teachers attending a training programme dealing with child abuse; this course was repeated 17 times. The second training programme concerned teaching skills and was attended by 231 postgraduates; this course was repeated 25 times.

Findings

Responses on open‐ended evaluation forms tended to be favourable with reference to “human related factors” and unfavourable when referring to “hygiene factors”.

Practical implications

It is suggested the way people complete evaluation forms is partly a reflection of their desire to see themselves as acting in a socially desirable manner. Interpretations made from such forms about the effectiveness or merits of any course should take this into account.

Originality/value

Provides considerably more information about open‐ended evaluations than research methods texts do. It is suggested that those who use open‐ended evaluations need to be particularly careful when interpreting them.

Details

Journal of European Industrial Training, vol. 30 no. 3
Type: Research Article
ISSN: 0309-0590


Article
Publication date: 6 October 2021

Hai-Anh Tran, Yuliya Strizhakova, Hongfei Liu and Ismail Golgeci

Abstract

Purpose

This paper aims to examine counterfactual thinking as a key mediator of the effects of failed recovery (vs. failed delivery) on negative electronic word-of-mouth (eWOM). The authors further investigate the effectiveness of using recovery co-creation in minimizing customers’ counterfactual thinking.

Design/methodology/approach

This research includes textual analysis of online reviews (Study 1) and three scenario-based experiments (Studies 2, 3a and 3b). In addition to using item-response scales, the authors analyze negative online reviews and participants’ open-ended responses to capture their counterfactual thinking.

Findings

Failed recovery (vs failed delivery) increases counterfactual thinking, which, in turn, increases negative eWOM. These mediating effects of counterfactual thinking are consistent across textual analyses and experimental studies, as well as across different measures of counterfactual thinking. Counterfactual thinking also impacts customer anger in experiments; however, anger alone does not explain the effects of failed recovery on negative eWOM. Counterfactual thinking can be minimized by co-created recovery, especially when it is used proactively.

Practical implications

The findings demonstrate the detrimental effects of counterfactual thinking and offer managerial insights into co-creation as a strategy to minimize customers’ counterfactual thinking. The authors also highlight the importance and ways of tracking counterfactual thinking in digital outlets.

Originality/value

The authors contribute to counterfactual thinking and service recovery research by demonstrating the effects of failed recovery on counterfactual thinking that, in turn, impacts negative eWOM and offering a novel way to measure its expression in online narratives. The authors provide guidance on how to use co-creation in the service recovery process to minimize counterfactual thinking.

Details

European Journal of Marketing, vol. 55 no. 12
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 14 July 2020

Louisa Ha, Chenjie Zhang and Weiwei Jiang

Abstract

Purpose

Low response rates in web surveys and the use of different devices in entering web survey responses are the two main challenges to response quality of web surveys. The purpose of this study is to compare the effects of using interviewers to recruit participants in computer-assisted self-administered interviews (CASI) vs computer-assisted personal interviews (CAPI) and smartphones vs computers on participation rate and web survey response quality.

Design/methodology/approach

Two field experiments using two similar media use studies on US college students were conducted to compare response quality in different survey modes and response devices.

Findings

Response quality with computer entry was better than with smartphone entry in both studies, for both open-ended and closed-ended question formats. A device effect on overall completion rate was significant only when interviewers were present.

Practical implications

Survey researchers are given guidance on how to conduct online surveys using different devices and choices of question format to maximize survey response quality. The benefits and limitations of using an interviewer to recruit participants and of smartphones as web survey response devices are discussed.

Social implications

It shows how computer-assisted self-interviews and smartphones can improve response quality and participation among underprivileged groups.

Originality/value

This is the first study to compare response quality in different question formats across CASI, e-mail delivered online surveys and CAPI. It demonstrates the importance of the human factor in creating a sense of obligation that improves response quality.

Details

Internet Research, vol. 30 no. 6
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 1 November 2015

Lynn Allyson Kelley and Lee Freeman

Abstract

Although there is a lack of research on instruction that aims at facilitating students’ use of questioning with peers, many early childhood social studies textbooks and resources include activities and lessons recommending that students conduct interviews, with an explicit assumption that young students are capable of formulating and using questions in the context of an interview. In these instances, no suggestions or ideas are given to teachers regarding instruction that will encourage and facilitate students’ questioning. The purpose of this study was to determine whether the levels of social studies interview questions second graders formulate and use can be increased with questioning instruction, in terms of quality, defined as depth of response, and in terms of quantity. This study generated research hypotheses that could be investigated in future research on instruction aimed at increasing young children’s questioning abilities as demonstrated in social studies.

Details

Social Studies Research and Practice, vol. 10 no. 3
Type: Research Article
ISSN: 1933-5415

Book part
Publication date: 1 August 2012

Sukhbir Sandhu

Abstract

Purpose – This paper reflects on how the mode in which we ask questions affects the responses. It explores the differences between responses to the same questions obtained through two different modes – depth interviews and self-administered questionnaires (SAQs).

Approach – This paper is based on a series of serendipitous but enlightening insights that were obtained while conducting research that sought to examine the drivers of corporate environmentalism in firms based in Eastern and Western economies. The methodology adopted in the research project involved conducting depth interviews with senior-most managers in business organizations in India (Eastern) and New Zealand (Western). The insights that form the basis for this paper were gained when some managers treated the list of questions in the interview guide as a structured open-ended questionnaire and sent back detailed written responses.

Findings – This paper reports that the written responses obtained through SAQs in this project were different in both form and content: they were staid, reserved, clichéd and aimed at being politically correct. In contrast, the responses to the same questions asked in the interviews were open and candid admissions. Interview responses stood up to triangulation tests, while the written responses did not. These differences were particularly evident in the Eastern context.

Research implications – While both SAQs and interviews are prone to social desirability bias, this paper suggests that there is a greater opportunity to reduce social desirability bias in interviews. This is especially true if a trained interviewer can convince the participants of the credibility, importance and legitimacy of the study.

Originality/value – This paper contributes in two important ways: (1) it addresses the issue of how responses to the same question differ across SAQs and depth interviews in strategy and management research; and (2) it examines whether this effect differs across Eastern and Western organizational contexts.

Details

West Meets East: Toward Methodological Exchange
Type: Book
ISBN: 978-1-78190-026-0

Article
Publication date: 16 November 2018

Louisa S. Ha and Chenjie Zhang

Abstract

Purpose

The purpose of this paper is to examine the effect of smartphones and computers as web survey entry response devices on the quality of responses in different question formats and across different survey invitation delivery modes. Respondents’ device preferences and response immediacy were also compared.

Design/methodology/approach

Two field experiments were conducted with a cluster sampling and a census of all students in a public university in the USA.

Findings

A device effect on response quality was found only when using computer-aided self-interviews, not in e-mail delivered web surveys. Although the computer was the preferred device, the smartphone’s immediate response rate was significantly higher than the computer’s.

Research limitations/implications

The sample was restricted to college students, who are more proficient users of smartphones and have high access to computers. However, the direct comparison in the two studies using the same population increases the internal validity of the comparison of different web survey delivery modes.

Practical implications

Because device differences in response quality were minor, researchers can consider making greater use of smartphones for field work, such as computer-aided self-interviews, to complement e-mail delivered surveys.

Originality/value

This is the first study to compare the response device effects of computer-aided self-interviews and e-mail delivered web surveys. Because web surveys are increasingly used and data are collected on a variety of devices, understanding how respondents behave on different devices, along with the strengths and weaknesses of different delivery methods, helps researchers improve data quality and develop effective web survey delivery and participant recruitment.

Details

Online Information Review, vol. 43 no. 3
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 2 September 2014

ManMohan S. Sodhi and Ekaterina Yatskovskaya

Abstract

Purpose

The purpose of this paper is to investigate an initial set of formative indicators to measure the level of efforts on sustainable use of water by companies from different sectors to eventually generate an index with a ranking of such companies.

Design/methodology/approach

The authors started with unstructured data from an open-ended survey conducted by the Carbon Disclosure Project (CDP) on over 300 global companies. Using data from 158 of the companies in that survey, spanning 27 different two-digit UK SIC codes, the authors devised the indicators, translated them into questions requiring responses on a seven-point Likert scale, and then coded the companies’ responses in the CDP survey against the questionnaire.

Findings

First, all the questions were valid in that responses could be provided. Second, in open-ended surveys like the CDP’s, companies provided information only on selected dimensions and not on others. Third, across sectors, companies put more effort into usage efficiency than into where the water comes from or where it goes after use.

Research limitations/implications

The questions still require field-testing for validation and user acceptance.

Practical implications

The proposed questions could become part of a survey for companies to self-assess or to disclose information on the sustainable use of water. An index created using disclosed data would motivate companies to make more effort towards sustainable use of water.

Originality/value

The authors believe this to be the first effort towards formulating a sustainability index of companies’ use of water.

Details

International Journal of Productivity and Performance Management, vol. 63 no. 7
Type: Research Article
ISSN: 1741-0401
