Citation: Ole Pors, N. (2000), "Sampling of respondents and SERVQUAL studies", The Bottom Line, Vol. 13 No. 2. https://doi.org/10.1108/bl.2000.17013baf.001
Publisher: Emerald Group Publishing Limited
Copyright © 2000, MCB UP Limited
Sampling of respondents and SERVQUAL studies
Keywords: Statistical forecasting, Libraries, User studies, Statistics
Introduction
Surveys following the so-called SERVQUAL methodology have not been abundant in library and information science. A few studies have looked specifically at reference service quality, and Nitecki (1997) made an inventory and analysis of SERVQUAL studies conducted in libraries in the 1990s. I will not go into the theory and methodology behind SERVQUAL studies; I will simply discuss some pertinent sampling problems in relation to these kinds of studies, problems that appear to have been somewhat overlooked.
A SERVQUAL study normally investigates the relationship between users' expectations of the quality of a service and their experience of the quality of the given service. This is done by means of a questionnaire.
A SERVQUAL study thus forms part of a whole range of user- or customer-oriented studies employed in libraries. Such studies are an important means of collecting data about services, perceptions and changing demographics, but they are rather expensive to conduct in a satisfactory and scientifically sound way. One reason is that possible flaws in the research design also matter financially, in relation to how much money can be allocated to the research project. It is important to note that a substantial body of literature has discussed a long list of possible flaws in the SERVQUAL methodology and the assumptions behind it.
What is a SERVQUAL study?
A very short and somewhat crude definition is that it is a methodology for measuring the gap between customers' expectations and their perceptions or experiences of aspects of a given service. Measurement is normally carried out along five dimensions: reliability, assurance, tangibles, empathy and responsiveness.
Part one: the researcher would normally formulate 20-30 statements about aspects of the service distributed over the five dimensions. These statements are formulated in relation to expectations, for example: staff members will always be courteous and polite.
Part two: the same 20-30 statements are repeated in a second part of the questionnaire, slightly reworded because they now relate to perceptions or experiences of a given aspect of the service, for example: the staff members were courteous and polite. Respondents normally answer the statements on a seven-point Likert scale.
Part three: often a third part of the SERVQUAL questionnaire asks the respondents to rate the importance of the five dimensions of service quality by distributing a total of 100 points among them.
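To make the gap logic concrete, here is a minimal numerical sketch in Python; the dimension means and the 100-point importance weights below are invented purely for illustration and do not come from any actual study.

```python
# Minimal sketch of gap scores from the three parts of a SERVQUAL
# questionnaire. All figures are invented; the dimension names are the
# five dimensions mentioned above.
expectation_means = {"reliability": 6.4, "assurance": 6.1, "tangibles": 5.2,
                     "empathy": 5.8, "responsiveness": 6.0}
perception_means  = {"reliability": 5.6, "assurance": 5.9, "tangibles": 5.4,
                     "empathy": 5.1, "responsiveness": 5.5}
# Part three: importance weights, 100 points distributed over the dimensions.
importance_points = {"reliability": 30, "assurance": 20, "tangibles": 10,
                     "empathy": 15, "responsiveness": 25}

# Gap = perception minus expectation; a negative gap means the service
# fell short of what respondents expected on that dimension.
gaps = {d: perception_means[d] - expectation_means[d] for d in expectation_means}

# Weighted overall gap, using the 100-point importance scores as weights.
weighted_gap = sum(gaps[d] * importance_points[d] for d in gaps) / 100

for d, g in gaps.items():
    print(f"{d:15s} gap = {g:+.1f}")
print(f"weighted overall gap = {weighted_gap:+.2f}")
```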
It is evident that a SERVQUAL study conducted this way is an in-depth study. Because the goal is to measure gaps between expectations and experiences, the study should give strong indications of where to improve aspects of a given service. It should be said, however, that some researchers have conducted SERVQUAL as a broad study. In the library setting it is normal to conduct a SERVQUAL study by means of a mail questionnaire, drawing the respondents from some central list of users of a given library. Other, more business-oriented SERVQUAL studies have had customers fill out the questionnaire after a service encounter. This creates a danger that the actual perception of the service in some way influences the way people answer the statements relating to expectations. It is exactly this question of sampling frame that I will discuss briefly in the following paragraphs.
Sampling frames
There are serious drawbacks to a mail survey. The most substantial ones are described in most introductory textbooks on research methodology and will not be repeated in detail here. Still, it is important to note problems concerning:
• understanding of statements;
• response rate; and
• knowledge about who really answers the questionnaire.
More important in this context is the way people respond to the questionnaire, in particular to what degree respondents compare their scores on the expectation section with their scores on the perception section of the survey instrument. There are also questions about frequency of use, how well respondents can recollect their last visit, and how confident they really feel about their ability to generalise perceptions of a given service or of the services overall.
The most serious drawback of this sampling frame is the possibility that answers to statements in one part of the questionnaire direct the answers on the same topic in the other part. This, together with the more serious discussion of the theoretical difficulties of defining expectations, perceptions and the gap between them, has given rise to doubts about the validity of some SERVQUAL studies.
I would not dare to propose a better way of collecting the questionnaires, but I would like to discuss a more experimental approach to questionnaire collection in a library that intends to conduct a SERVQUAL study.
Different sampling methods
In the traditional SERVQUAL analysis one works with paired distributions, because the same customer answers statements concerning both expectations and perceptions. In a statistical sense this implies that one can conduct, for example, a paired t-test on the possible difference between the two means.
I have briefly discussed some of the main problems with the sampling frames employed in some SERVQUAL studies. It would be interesting to conduct an experiment that used different sampling frames and analysed possible differences in outcomes due to the sampling procedure and the situational context in which people answered the statements in the questionnaire.
Table I gives an overview of eight different sampling procedures. There are of course more possible variations. One could, for example, deliver a questionnaire about expectations when customers leave the service point, or deliver one about general perceptions when people enter. It would be useful to conduct a SERVQUAL study along these lines, because it could give useful information about the effect of the given sampling procedure on the results. Each of the procedures is based on random sampling, and random sampling conducted properly should ensure a high degree of representativeness.
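Whatever the procedure, drawing the respondents is straightforward once a sampling frame exists. The sketch below is a minimal illustration of simple random sampling, assuming the frame is available as a list of library card numbers; the list and the sample size are invented.

```python
# Minimal sketch: simple random sampling from a hypothetical frame of
# 5,000 library card numbers standing in for the central user list.
import random

user_list = [f"card-{i:05d}" for i in range(1, 5001)]  # hypothetical sampling frame
sample_size = 400

random.seed(42)  # fixed seed only so the example is reproducible
sample = random.sample(user_list, sample_size)  # every user equally likely to be drawn
print(sample[:5])
```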
In the literature listed below there has been a discussion about the possibility of measuring expectations after the service has taken place. In relation to a mail questionnaire there is the further problem of expectations and perceptions changing over time. The magnitude of this problem is not really known, and it can vary from setting to setting.
The question is whether it is truly necessary that exactly the same persons answer the two parts of the questionnaire. One could look at it as the difference between a paired t-test and a two-sample t-test. In a paired sample the sampling fluctuation is reduced considerably, so normally it would be advisable to employ paired data if possible, as this provides more leverage. The paired t-test is a bit more precise, but the question is whether the price of the greater precision is other, more serious methodological flaws. Personally, I would like to see a library or another service organisation experiment with the sampling next time they intend to conduct a SERVQUAL investigation.
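To illustrate the statistical point, the sketch below runs both a paired and a two-sample t-test on the same invented seven-point scores for a single statement; the data and sample size are purely hypothetical, and the example only shows how removing between-respondent variation tends to sharpen the paired comparison.

```python
# Illustrative comparison of a paired t-test (same respondents answer both
# parts) and a two-sample t-test (different respondents answer each part).
# The scores are invented 7-point Likert ratings for one statement.
from scipy import stats

expectations = [7, 6, 7, 5, 6, 7, 6, 7, 5, 6]
perceptions  = [6, 5, 6, 5, 5, 6, 6, 6, 4, 6]

# Paired design: each respondent contributes one expectation and one
# perception score, so the test works on the individual differences.
paired = stats.ttest_rel(expectations, perceptions)

# Independent design: treat the two lists as coming from two separate
# samples of respondents, as in the alternative sampling frames above.
independent = stats.ttest_ind(expectations, perceptions)

print(f"paired t-test:     t = {paired.statistic:.2f}, p = {paired.pvalue:.3f}")
print(f"two-sample t-test: t = {independent.statistic:.2f}, p = {independent.pvalue:.3f}")
# For these data the paired test yields a clearly smaller p-value, because
# between-respondent variation is removed from the comparison.
```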
In the literature there has been a debate over participants' ability to weight the five dimensions of the SERVQUAL instrument in relation to importance. A sampling frame like the one proposed here could give some indication of how people evaluate importance in different situations. Judgments of importance may of course shift when the customer is in direct contact with the services he or she is offered.
Conclusion
Offered here are some thoughts about pertinent sampling problems in relation to SERVQUAL questionnaire administration. It is important to note that questionnaire administration will always be a trade-off between the quality of the data and the cost of obtaining them. If a library has decided to base some of its planning on this kind of information, it seems sensible to employ methods that allow the quality of the data to be scrutinised.
On the surface, one might believe that an experiment like the one proposed would be rather costly. However, personal delivery of questionnaires to customers entering or leaving a library would normally give a very high response rate, which allows for smaller samples.
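The arithmetic behind that last point is simple. The sketch below compares the number of questionnaires that would have to be distributed to reach a fixed number of completed returns; the response rates are assumptions chosen only for illustration, not measured figures.

```python
# Rough sketch of why a high response rate allows smaller distributed
# samples: distributed = target completed returns / expected response rate.
target_completed = 400  # desired number of completed questionnaires (assumed)

expected_response_rate = {
    "mail questionnaire": 0.35,           # assumed typical mail response rate
    "handed out at entrance or exit": 0.85,  # assumed rate with personal delivery
}

for method, rate in expected_response_rate.items():
    distributed = round(target_completed / rate)
    print(f"{method:32s} distribute about {distributed} questionnaires")
```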
I would like comments on these ideas about employing different sampling frames. Should you wish to comment, please send your remarks to nop@db.dk.
Niels Ole Pors, Associate Professor at The Royal School of Library and Information Science, Denmark
References and further reading
Asubonteng, P., McCleary, K. and Swan, J.E. (1996), "SERVQUAL revisited: a critical review of service quality", The Journal of Services Marketing, Vol. 10 No. 6, pp. 62-79.
Buttle, F. (1996), "SERVQUAL: review, critique, research agenda", European Journal of Marketing, Vol. 30 No. 1, pp. 8-32.
Nitecki, D. (1997), "Assessment of service quality in academic libraries: focus on the applicability of the SERVQUAL", Proceedings of the 2nd Northumbrian Conference on Performance Measurement in Library and Information Services, pp. 181-97.
O'Neill, M.A., Palmer, A.J. and Beggs, R. (1998), "The effects of survey timing on perceptions of service quality", Managing Service Quality, Vol. 8 No. 2, pp. 126-32.
Robinson, S. (1999), "Measuring service quality: current thinking and future requirements", Marketing Intelligence & Planning, Vol. 17 No. 1, pp. 21-32.