Development and validation of a scale for measuring hospital service quality: a dyadic approach

Raghav Upadhyai (Doon Business School, Dehradun, India)
Neha Upadhyai (Department of Community Medicine, SGRRIM&HS, Dehradun, India)
Arvind Kumar Jain (School of Business, University of Petroleum and Energy Studies, Dehradun, India)
Gaurav Chopra (School of Management, IMS Unison University, Dehradun, India)
Hiranmoy Roy (School of Business, University of Petroleum and Energy Studies, Dehradun, India)
Vimal Pant (NIFTEM, Kundli, India)

Journal of Health Research

ISSN: 2586-940X

Article publication date: 22 March 2021

Issue publication date: 27 April 2022


Abstract

Purpose

This study integrates the provider's as well as the patient's perspective in developing and validating a scale to measure hospital service quality in multispecialty hospitals.

Design/methodology/approach

An exploratory sequential mixed-method approach was used in this study. The strategies used included a thematic literature review, semi-structured interviews, modified Delphi and confirmatory factor analysis.

Findings

The reliability coefficient of the 41-item scale was 0.963, with the pivotal, core and peripheral attributes having Cronbach's alphas of 0.907, 0.910 and 0.891, respectively, and a scale content validity index (S-CVI/Ave) of 0.9151. The composite reliability scores of all constructs were greater than 0.7, with the average variance extracted (AVE) of all constructs greater than 0.5.

Originality/value

The instrument can be used to measure the difference between what service providers believe customers expect and customers' actual needs and expectations. The scale can also be used to measure the difference between what is delivered (as perceived by the provider) and what customers perceive they have received (because customers are unable to evaluate service quality accurately). The dyadic approach of administering this questionnaire will lead to the identification of a knowledge gap and a perception gap in delivering hospital service quality.

Citation

Upadhyai, R., Upadhyai, N., Jain, A.K., Chopra, G., Roy, H. and Pant, V. (2022), "Development and validation of a scale for measuring hospital service quality: a dyadic approach", Journal of Health Research, Vol. 36 No. 3, pp. 473-483. https://doi.org/10.1108/JHR-08-2020-0329

Publisher: Emerald Publishing Limited

Copyright © 2021, Raghav Upadhyai, Neha Upadhyai, Arvind Kumar Jain, Gaurav Chopra, Hiranmoy Roy and Vimal Pant

License

Published in Journal of Health Research. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

In the past decade, much of the hospital service quality (HSQ) research has focused on managing customer expectations and perceptions [1]. HSQ is usually evaluated as the gap between health care seekers' expectations and their perceptions of performance. The dimensions on which HSQ is measured vary from a single dimension to as many as ten [2]. SERVQUAL is a widely used instrument for measuring the service quality gap between customer expectations and perceptions on five dimensions [3]. Some authors have proposed, as an alternative (SERVPERF), that measuring customers' perceptions alone on these dimensions is sufficient to judge service quality [4]. Besides these two, several other variations are available to measure HSQ in various health care settings [3], but all take a user-centric approach.

In health care services, the user-centric view of service quality may be prejudiced for several reasons, information asymmetry being one of them. Consequently, patients/attendants are left with no option but to believe in what they have been informed of or delivered. The patient/attendant is considered a layperson in evaluating fundamental medical/clinical care [2]. HSQ evaluations, therefore, tend to be biased toward the process of delivery and the physical settings of hospitals, which a patient/attendant can easily evaluate. Unlike users of other services, patients are in a state of physical or psychological discomfort [5] and are likely to see service quality differently from service providers [6]. Despite health care seekers' disposition toward evaluating the functional aspects of care, providers believe that an acceptable level of technical quality should precede it [7]. Therefore, from the providers' perspective, HSQ may vary with the knowledge and professional effort they apply [8]. Further, physical and emotional job-related stress may cause service quality to vary [9]. The inherent inseparability of the provider and the seeker of care in a professional service like health care calls for a dyadic view rather than taking either the seeker's or the provider's perspective of HSQ alone [10].

Hospital service quality literature sheds little light on two relevant questions: (1) What could be a possible way to evaluate service quality considering the dyadic nature of professional exchange in the service relationship? (2) Which dimensions are reflective of these dyadic exchanges? A dyadic perspective on measuring hospital service quality paves a new way not only for assessing the service quality gap between customer expectations and perceptions but also for measuring the knowledge and perception gaps [11], benefiting seekers, practitioners, hospital managers and administrators. Measuring the perceptions of both service seekers and providers can improve relationships, job satisfaction and performance in the health care delivery process. The dearth of HSQ measurement scales incorporating both participants' perspectives, from item creation through final development and validation, adds novelty to our study.

Adopting a mixed-method research design, we first conducted a literature search that identified eighteen dimensions of hospital service quality. The PCP model [12] helped in classifying these dimensions under pivotal (end product or outcome), core (people, process and organizational structure) and peripheral (incidental extras or frills around the service encounter) attributes, which served as priority themes for conducting interview rounds with health care service seekers and providers. Template analysis [13] of the interview-based textual data from seekers and providers generated an item pool of 107 unique statements. The statements were evaluated for content validity by an authoritative panel using a modified Delphi approach. The scale was then refined and tested for reliability and validity using confirmatory factor analysis. The final forty-one-item scale incorporates thirteen dimensions of hospital service quality drawn from both health care seekers and providers.

Methodology

Phase 1: item generation process

A thematic review of articles related to hospital service quality was conducted during January–March 2018. A total of sixty-three articles published in thirty-four journals available to the authors were reviewed to identify the determinants of hospital service quality. This led to the identification of priority themes for the subsequent round of interviews.

Patients and their attendants who had visited any multispecialty hospital in the previous year were approached using snowball sampling. Eleven women and ten men, aged twenty-five to sixty-two years, participated, and semi-structured interviews were conducted with them during June–September 2018. During the same period, fifteen doctors, nine nursing and para-medical staff and three hospital administrators/managers working in three multispecialty hospitals were also interviewed. These respondents were likewise approached using snowball sampling; the sample comprised sixteen women and eleven men aged twenty-five to fifty-one years. The template analysis technique [13] was used to analyze the qualitative data generated through the interviews.

Phase 2: Modified Delphi process

Expert selection

As the dimensions of hospital service quality had already been identified, round one of a classical Delphi became redundant in our study, calling for a modified Delphi with a heterogeneous panel. Twenty-six panelists were approached using purposive sampling, and the purpose and design of the study were explained to them. Informed consent was obtained from the panelists, and their anonymity was maintained throughout the survey.

Data collection

Twenty-six panelists were invited to participate in the survey between August and October 2019, and all consented. A paper survey was designed, and each respondent was briefed on how to fill in the questionnaire. After one reminder, twenty-three panelists returned the questionnaire; three could not participate due to other engagements. The authoritative coefficient (Cr) was used to establish the credibility of the panel members [14]. It was determined by two factors: the panelist's basis for judging the indicator (Ca) and familiarity with the indicator (Cs). A Cr value greater than 0.7 was considered acceptable.
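
For illustration, the panel credibility check can be expressed in a few lines of code. This is a minimal sketch assuming the common convention Cr = (Ca + Cs) / 2 used in Delphi studies such as [14]; the panel values below are hypothetical, not the study's data.

```python
# Expert authoritative coefficient (Cr) check: a sketch assuming the common
# convention Cr = (Ca + Cs) / 2, as in Delphi studies such as Zhao et al. [14].

def authority_coefficient(ca: float, cs: float) -> float:
    """Cr as the mean of the judgment-basis (Ca) and familiarity (Cs) scores."""
    return (ca + cs) / 2

# Hypothetical (Ca, Cs) pairs for three panelists
panel = [(0.90, 0.80), (0.80, 0.70), (0.85, 0.75)]
cr_values = [authority_coefficient(ca, cs) for ca, cs in panel]
print(cr_values)                          # [0.85, 0.75, 0.8]
print(all(cr > 0.7 for cr in cr_values))  # the paper's acceptance threshold
```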

In round one, panelists were invited to provide ratings on a 5-point Likert scale suited to surveys measuring level of agreement. The panelists rated their degree of agreement with each item on an ordinal scale from strongly disagree to strongly agree, and the median rating was calculated for each item. All items were re-presented to the panelists in round two so that they could review their ratings against the group median computed in round one.

Items with a median rating of 4 or more were assumed to initially qualify as items measuring hospital service quality. The panelists were also asked to rate the relevance of each item on an ordinal scale of 1 to 4 [where 1 = not relevant, 2 = somewhat relevant, 3 = quite relevant and 4 = highly relevant]. Ratings of 3 and 4 were considered content valid. To ensure the stability of responses, the multi-rater kappa coefficient for agreement beyond chance was calculated [15]. Thus, items with a median rating of 4 and above and item-content validity index (I-CVI) values above 0.79 were retained.
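
A minimal sketch of this consensus filter follows, assuming each item carries a list of agreement ratings (1-5) and relevance ratings (1-4) from the panel; the ratings shown are hypothetical.

```python
import statistics

def i_cvi(relevance):
    """I-CVI: proportion of panelists rating the item 3 or 4 on relevance."""
    return sum(r >= 3 for r in relevance) / len(relevance)

def retain(agreement, relevance, median_cutoff=4, cvi_cutoff=0.79):
    """Paper's criteria: median agreement >= 4 and I-CVI above 0.79."""
    return (statistics.median(agreement) >= median_cutoff
            and i_cvi(relevance) > cvi_cutoff)

# Hypothetical ratings from 23 panelists for a single candidate item
agreement = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 4, 3, 4, 5, 4, 4, 4, 5, 4, 4, 3, 4, 5]
relevance = [4, 3, 4, 4, 2, 4, 3, 4, 4, 3, 4, 3, 4, 4, 3, 4, 4, 4, 3, 4, 2, 4, 4]
print(round(i_cvi(relevance), 3))    # 0.913 for this hypothetical item
print(retain(agreement, relevance))  # True: both criteria hold
```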

Phase 3: scale refinement and validation

A close-ended, self-administered questionnaire containing the items retained from the Delphi rounds was used to collect information from caregivers. The data were collected both online and offline using convenience sampling to avoid common method bias. The psychometric properties of the proposed instrument were tested using confirmatory factor analysis [16]. Cronbach's alpha (>0.7), composite reliability (>0.7) and unidimensionality through average variance extracted (>0.5) were checked as per the guidelines [17]. The schema of the research process is shown in Figure 1.
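
The first of these checks can be computed directly from the item-response matrix. The sketch below implements the standard Cronbach's alpha formula on a hypothetical (respondents x items) array; it is illustrative only, not the authors' code.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical data: 387 respondents x 41 items driven by one latent factor
rng = np.random.default_rng(0)
latent = rng.normal(size=(387, 1))
scores = latent + rng.normal(scale=1.0, size=(387, 41))
print(cronbach_alpha(scores))  # compare against the 0.7 guideline [17]
```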

Ethical considerations

The study proposal and protocols were approved by the Chairman of the Faculty Research Committee of the University of Petroleum and Energy Studies, Dehradun, India, on August 6, 2016 (ref. no. UPES/Ph.D/FRC-5-6 Aug'16/2016/19).

Results

Initial construction of item pool

The thematic literature search and semi-structured interviews resulted in the identification of a pool of statements. The statements reflected either views expressed by the respondents during the interviews or statements used by previous researchers to measure health care service quality. Using template analysis [13], the items were classified under three attributes comprising fourteen dimensions: (1) pivotal [end product or outcome]: diagnosis and treatment, medical infrastructure, needs management, patient safety and privacy, and professional knowledge, skills and competence; (2) core [people, process and organizational structure]: admission, discharge, medical communication, personal behavior and process; and (3) peripheral [incidental extras or frills around service encounters]: amenities and physical infrastructure, charges and payment arrangement, image, and quality of room and food [12]. After removing redundant and similar-meaning statements, a final item pool of 107 statements was prepared for the Delphi round.

Delphi round

Twenty-three of the twenty-six panelists participated in the first round (88%), and all of them completed the second round (100%). The mean authoritative coefficient Cr was 0.79 (SD = 0.06), which was considered good (Table 1). Of the 107 statements presented to the panelists, only twenty-two items initially met the consensus criteria, that is, a median rating of at least 4 and I-CVI greater than 0.79. The individual ratings were aggregated and summarized, and each panelist was re-presented with a survey containing their own ratings on all statements alongside the aggregated ratings. Panelists were given the chance to revisit their level of agreement with each statement in light of the group response. The second round of Delphi resulted in the retention of forty-nine statements that achieved consensus on both criteria: a median rating of at least 4 and I-CVI greater than 0.79 (range 0.8261 to 1).

Scale content validity (S-CVI/Ave) was calculated for the items retained after the second round of Delphi. The S-CVI/Ave value of 0.9095 (SD = 0.0531) was above 0.9, indicating excellent content validity [18]. Fleiss's kappa was used to calculate inter-rater reliability; kappa ranges from −1 to +1, with positive values indicating agreement between the raters beyond chance. The Fleiss's kappa value of 0.63 indicated substantial inter-rater reliability [19]. A p-value of less than 0.05 indicated that the agreement between the raters was significantly better than would have been achieved by chance.
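
A sketch of the Fleiss's kappa computation is shown below. It operates on an (items x categories) count matrix in which each row sums to the number of raters; the counts used here are hypothetical, not the study's data.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss's kappa: (P_bar - P_e) / (1 - P_e) for n raters over N items."""
    n = counts.sum(axis=1)[0]                # raters per item
    p_j = counts.sum(axis=0) / counts.sum()  # overall category proportions
    p_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical: 5 items rated by 23 panelists into the 4 relevance categories
counts = np.array([[0, 1, 4, 18],
                   [1, 2, 5, 15],
                   [0, 0, 3, 20],
                   [2, 3, 6, 12],
                   [0, 1, 2, 20]])
print(fleiss_kappa(counts))  # values of 0.61-0.80 are read as substantial [19]
```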

Tests of scale refinement and validation

The online and offline surveys resulted in 403 responses (288 online, 115 offline). Ten questionnaires were unusable due to missing information, and six were considered outliers because the observations had unique combinations of values across variables [17], leaving 387 usable responses. The values of the absolute, relative and non-centrality-based fit indices, shown in Table 2, surpassed the recommended thresholds for all dimensions across the three attributes. The composite reliability (CR) of all dimensions in the final 41-item scale was above 0.7 [17], establishing construct reliability, with a minor deviation in the charges and payment arrangement construct (CPA). The AVE of all constructs was greater than 0.50, indicating good convergent validity, as shown in Table 3. Cronbach's alpha, the widely used reliability coefficient, was 0.963 (>0.7) for the complete 41-item scale. The 15 items of the pivotal attribute, 14 items of the core attribute and 12 items of the peripheral attribute had reliabilities of 0.907, 0.910 and 0.891, respectively, surpassing the threshold.
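
The construct-level figures in Table 3 follow the usual formulas for composite reliability and AVE computed from standardized loadings. The sketch below applies them to the diagnosis and treatment (DT) loadings reported in Table 3 and matches the tabled values to within rounding.

```python
def composite_reliability(loadings):
    """CR = (sum L)^2 / ((sum L)^2 + sum (1 - L^2)) over standardized loadings."""
    s = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)
    return s / (s + error)

def ave(loadings):
    """AVE: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

dt = [0.80, 0.81, 0.60]  # Q12, Q13, Q14 loadings for the DT construct (Table 3)
print(round(composite_reliability(dt), 3))  # 0.784 vs 0.785 reported
print(round(ave(dt), 3))                    # 0.552 vs 0.553 reported
```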

Items 34 and 49 were retained in the final questionnaire because of their high I-CVI values in the earlier Delphi rounds, despite contributing low AVE in the model. The modification indices linked question 31 strongly to the process construct; when this item was shifted from personal behavior (PB) to process (PROC), the CFI increased from 0.924 to 0.955, and the CR of the process construct also improved. All items meeting the recommended limits were kept in the final CFA model, in alignment with the Delphi results, and the remaining items (Q1, 2, 3, 8, 11, 32, 43 and 48), which did not contribute significantly to the model, were removed from the final questionnaire. The S-CVI/Ave of the 41-item scale improved from 0.9095 to 0.9151 after the CFA rounds, indicating better content validity (Table 4).

Discussion

The scale distinguishes itself from other HSQ scales by incorporating a dyadic approach to service encounters, a thorough scale development process and an authoritative panel. One previously developed scale using the dyadic approach based its items only on face and content validity established in discussion with experts [16]. Another scale planned in-depth interviews at the development stage but ultimately resorted only to a modified Delphi due to constraints, and it further lacked establishment of the authority of its heterogeneous Delphi panel [20]. Other contemporary scales for measuring hospital service quality have borrowed most of their items from the literature alone [5, 16], with little effort to recognize that the health care needs of developing countries differ from those of others [21].

The thirteen dimensions of HSQ have linkages with the five-dimensional construct of the SERVQUAL scale. Diagnosis and treatment, the professional skills and competence of service providers, and medical communication contribute to the reliability of the service. The process construct depicts the responsiveness of the service provider. Patient safety and privacy, personal behavior, and charges and payments provide assurance to customers. Empathy, in the form of caring, individual attention, was indicated by the need management and discharge constructs in our scale. Medical infrastructure, amenities and physical infrastructure, and quality of room and food add up to tangibility. However, the authors recommend that the proposed service quality dimensions be classified under the PCP model of service quality because of their better linkages with it.

The authors propose a dyadic approach to using this questionnaire to measure HSQ. The scale can be used to measure the knowledge gap, that is, the difference between what service providers believe customers expect and customers' actual needs and expectations. Further, the scale can be used to measure the perception gap, that is, the difference between what is delivered [as perceived by the provider] and what customers perceive they have received. Administering this questionnaire dyadically will thus lead not only to a measurement of the service gap but also to the identification of knowledge and perception gaps in HSQ [10, 11].
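
As an illustration of the dyadic scoring the authors describe, the sketch below computes the two gaps from mean item ratings collected on both sides of the dyad. The arrays, variable names and the 1-5 rating scale are hypothetical assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 41  # final scale length

# Hypothetical mean item ratings (1-5 scale) from the two sides of the dyad
provider_belief   = rng.uniform(3, 5, n_items)  # what providers think customers expect
customer_expect   = rng.uniform(3, 5, n_items)  # customers' stated expectations
provider_delivery = rng.uniform(3, 5, n_items)  # service as providers see it delivered
customer_percept  = rng.uniform(3, 5, n_items)  # service as customers perceive it

knowledge_gap  = provider_belief - customer_expect     # misreading of expectations
perception_gap = provider_delivery - customer_percept  # misreading of delivery
print(knowledge_gap.mean(), perception_gap.mean())     # item-level gaps also usable
```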

The identified knowledge gap will help in building a better service design, while the perception gap will help in bridging the gap in service performance. Service quality managers and hospital administrators can use this questionnaire to measure service quality accurately and improve upon it, leading to increased profitability. Caregivers will be able to improve performance quality in ways valued by customers, and providers are likely to see higher treatment compliance.

Conclusion

Most multispecialty hospitals across the world administer internally developed scales for collecting patient feedback or rely on customer surveys conducted by private and public agencies. This patient feedback is treated as analogous to service quality evaluation. Such scales are user-centric and deal only with the demand-side perspective of service quality. From the supply side, caregivers' perspectives are completely omitted from such surveys, which might affect service quality evaluations. The available instruments in the hospital service quality literature consider only the customer's standpoint for such an evaluation. There is a need to incorporate health care providers' perspectives in service quality evaluation, as they experience and evaluate service quality differently. The dearth of a scale that incorporates both the demand- and supply-side viewpoints for measuring hospital service quality led to the development of this scale.

The items have been framed using firsthand qualitative and quantitative information generated throughout the scale development process. The scale items were generated through a rigorous methodology but need empirical testing in different health care settings. The authors suggest that scale items may be added according to context and country, while ensuring the validity and reliability of the constructs. Administering the questionnaire to measure the knowledge and perception gaps will yield deeper insights into HSQ. Nonetheless, the items identified here will help scholars, academicians and health care professionals design, refine and modify measures for evaluating hospital service quality.

Conflict of Interest: None

Figures and tables

Figure 1. Schema of research process

Table 1. Modified Delphi panelist profile

| | Health care practitioners | Patients | Academicians |
|---|---|---|---|
| Number of panelists (N = 23) | 10 | 10 | 3 |
| Age (yrs) | | | |
| 20–30 | 3 | 1 | |
| 31–40 | 5 | 5 | 3 |
| 41–50 | 2 | 4 | |
| Educational qualification | | | |
| Diploma | 1 | | |
| Graduate | 3 | 2 | |
| Masters | 6 | 2 | |
| Doctorate | | 6 | 3 |
| Avg work experience | 8.4 yrs (SD = 5.44) | *** | 13 yrs (SD = 7.22) |
| Recency of hospital visit (months) | | | |
| <3 | *** | 4 | *** |
| 4–6 | *** | 4 | *** |
| >6 | *** | 2 | *** |
| Authoritative coefficient | 0.804 | 0.688 | 0.8 |

Table 2. Goodness of fit indices

| Fit index | Limit* | Pivotal, before CFA (15 items) | Pivotal, after CFA (15 items) | Core, before CFA (20 items) | Core, after CFA (14 items) | Peripheral, before CFA (14 items) | Peripheral, after CFA (12 items) |
|---|---|---|---|---|---|---|---|
| Absolute fit indices | | | | | | | |
| χ2 | | 191.673 | 191.673 | 641.638 | 188.699 | 237.964 | 111.209 |
| df | | 79 | 79 | 160 | 71 | 71 | 48 |
| p value | >0.05 | 0 | 0 | 0 | 0 | 0 | 0 |
| χ2/df | 1.00–5.00 | 2.426 | 2.426 | 4.01 | 2.658 | 3.352 | 2.317 |
| RMR | <0.08 | 0.059 | 0.059 | 0.086 | 0.052 | 0.078 | 0.053 |
| GFI | >0.90 | 0.939 | 0.939 | 0.86 | 0.935 | 0.923 | 0.955 |
| AGFI | >0.80 | 0.907 | 0.907 | 0.817 | 0.904 | 0.885 | 0.926 |
| Relative fit indices | | | | | | | |
| NFI | >0.80 | 0.936 | 0.936 | 0.834 | 0.931 | 0.916 | 0.955 |
| PNFI | >0.50 | 0.704 | 0.704 | 0.703 | 0.726 | 0.715 | 0.694 |
| IFI | >0.90 | 0.961 | 0.961 | 0.87 | 0.956 | 0.939 | 0.974 |
| TLI | >0.90 | 0.948 | 0.948 | 0.845 | 0.943 | 0.922 | 0.964 |
| Non-centrality-based indices | | | | | | | |
| CFI | >0.90 | 0.961 | 0.961 | 0.869 | 0.955 | 0.939 | 0.974 |
| PGFI | >0.50 | 0.618 | 0.618 | 0.655 | 0.722 | 0.624 | 0.588 |
| RMSEA | <0.08 | 0.061 | 0.061 | 0.088 | 0.066 | 0.078 | 0.058 |

Note(s): *Recommended limits for χ2/df, RMR, GFI, AGFI, NFI, PNFI, IFI, TLI, CFI, PGFI and RMSEA as per [17]

Table 3. Convergent validity parameters

| Attribute | Construct | Item | Factor loading (above 0.5) | Composite reliability (above 0.7) | AVE (above 0.5) |
|---|---|---|---|---|---|
| Pivotal | DT | Q12 | 0.80 | 0.785 | 0.553 |
| | | Q13 | 0.81 | | |
| | | Q14 | 0.60 | | |
| | MI | Q24 | 0.76 | 0.764 | 0.521 |
| | | Q25 | 0.64 | | |
| | | Q26 | 0.76 | | |
| | NM | Q4 | 0.76 | 0.704 | 0.544 |
| | | Q5 | 0.71 | | |
| | PSP | Q42 | 0.68 | 0.864 | 0.616 |
| | | Q44 | 0.87 | | |
| | | Q45 | 0.84 | | |
| | | Q41 | 0.74 | | |
| | PKSC | Q27 | 0.72 | 0.789 | 0.555 |
| | | Q35 | 0.75 | | |
| | | Q36 | 0.77 | | |
| Core | DIS | Q15 | 0.74 | 0.793 | 0.541 |
| | | Q16 | 0.76 | | |
| | | Q17 | 0.74 | | |
| | MC | Q37 | 0.77 | 0.870 | 0.626 |
| | | Q38 | 0.86 | | |
| | | Q39 | 0.75 | | |
| | | Q40 | 0.79 | | |
| | PB | Q6 | 0.75 | 0.733 | 0.578 |
| | | Q7 | 0.77 | | |
| | PROC | Q31 | 0.68 | 0.854 | 0.541 |
| | | Q46 | 0.84 | | |
| | | Q47 | 0.80 | | |
| | | Q49 | 0.67 | | |
| Peripheral | API | Q34 | 0.67 | 0.800 | 0.575 |
| | | Q18 | 0.62 | | |
| | | Q19 | 0.86 | | |
| | | Q20 | 0.77 | | |
| | CPA | Q9 | 0.76 | 0.682 | 0.518 |
| | | Q10 | 0.68 | | |
| | IMG | Q28 | 0.84 | 0.858 | 0.602 |
| | | Q29 | 0.80 | | |
| | | Q30 | 0.79 | | |
| | | Q33 | 0.67 | | |
| | QRF | Q21 | 0.87 | 0.874 | 0.698 |
| | | Q22 | 0.84 | | |
| | | Q23 | 0.80 | | |

Table 4. Content validation index scores of final questionnaire

| Attribute/dimension and item code | Item | I-CVI |
|---|---|---|
| A.1 Pivotal: Diagnosis and treatment [DT] | | |
| Q12 | Doctor[s] diagnose the disease correctly | 0.8696 |
| Q13 | Doctor[s] start the treatment in time | 0.8261 |
| Q14 | Doctor[s] recommend timely investigations | 0.9565 |
| A.2 Pivotal: Medical infrastructure [MI] | | |
| Q24 | Hospital has in-house medical laboratories and diagnostic facilities | 1.0000 |
| Q25 | Hospital has an in-house pharmacy | 0.9130 |
| Q26 | Hospital has modern/latest medical equipment and instruments | 0.9130 |
| A.3 Pivotal: Need management [NM] | | |
| Q4 | Doctor[s] are available in the hospital whenever needed | 0.8261 |
| Q5 | Doctor[s] are available in the hospital | 0.8696 |
| A.4 Pivotal: Patient safety and privacy [PSP] | | |
| Q41 | Hospital ensures physical privacy for the patient | 0.9565 |
| Q42 | Hospital ensures that the patient information is kept private | 1.0000 |
| Q44 | Doctor[s] and nursing staff follow hygiene during the process of care | 0.9130 |
| Q45 | Hospital minimizes the chance of hospital-acquired infections and injuries to patients | 0.8261 |
| A.5 Pivotal: Professional knowledge, skills and competence [PKSC] | | |
| Q27 | Doctor[s] has/have reasonable experience in dealing with the patient's medical condition | 0.9565 |
| Q35 | Doctor[s] has/have professional knowledge, skills and competence | 0.9565 |
| Q36 | Nursing and para-medical staff have professional knowledge, skills and competence | 0.9565 |
| B.1 Core: Discharge [DIS] | | |
| Q15 | Hospital informs dos and don'ts to patients/attendants at the time of discharge | 0.8261 |
| Q16 | At the time of discharge, hospital provides a proper prescription which the patient/attendant can understand | 0.9565 |
| Q17 | Hospital informs follow-up date at the time of discharge | 0.9565 |
| B.2 Core: Medical communication [MC] | | |
| Q37 | Doctor[s] explain the possible complication[s]/side effect[s] of treatment to the patient/attendant | 0.9130 |
| Q38 | Doctor[s] explain the time to get a good outcome of treatment to the patient/attendant | 0.8696 |
| Q39 | Doctor[s] communicate the real condition to the patient/attendant | 0.9565 |
| Q40 | Doctor[s] explain the disease and its treatment to the patient/attendant | 0.9565 |
| B.3 Core: Personal behavior [PB] | | |
| Q6 | Doctor[s]' and nursing staff's behavior builds trust [belief and faith] in the patient/attendant | 0.9565 |
| Q7 | Doctor[s] provide hope to the patient/attendant | 0.9565 |
| B.4 Core: Process [PROC] | | |
| Q31 | Nursing staff and attendant[s] show professional integrity towards their work | 0.9565 |
| Q46 | Hospital conducts timely medical investigations | 0.9565 |
| Q47 | Hospital generates timely investigation reports | 0.9565 |
| Q49 | Patient is given immediate medical attention whenever needed | 0.9130 |
| C.1 Peripheral: Amenities and physical infrastructure [API] | | |
| Q18 | Amenities and physical infrastructure provide a sense of comfort to the patients | 0.8261 |
| Q19 | Amenities and physical infrastructure at the hospital are clean | 0.9565 |
| Q20 | Hospital uses disinfectants for cleanliness | 0.9565 |
| Q34 | Hospital has a proper waste disposal facility/process | 0.9565 |
| C.2 Peripheral: Charges and payment arrangement [CPA] | | |
| Q10 | Hospital ensures transparency in the billing process | 0.8261 |
| Q9 | Hospital ensures a convenient billing and payment process | 0.8261 |
| C.3 Peripheral: Image [IMG] | | |
| Q28 | Hospital has fairly good experience handling operative cases | 0.9130 |
| Q29 | Hospital has a good success rate in treating patients | 0.8696 |
| Q30 | Hospital has renowned doctors on its panel | 0.8261 |
| Q33 | Personnel at the hospital are neat in appearance | 0.9565 |
| C.4 Peripheral: Quality of room and food [QRF] | | |
| Q21 | Hospital has decent quality rooms | 0.8696 |
| Q22 | Hospital rooms are well ventilated | 0.9130 |
| Q23 | Hospital uses clean bed sheets | 0.9565 |

References

1.Upadhyai R, Jain AK, Roy H, Pant V. A review of healthcare service quality dimensions and their measurement. J Health Manag. 2019; 21(1): 102-27. doi: 10.1177/0972063418822583.

2.Upadhyai R, Jain AK, Roy H, Pant V. Participants' perspectives on healthcare service quality in multispecialty hospitals: a qualitative approach. J Health Manag. 2020; 22(3): 446-65. doi: 10.1177/0972063420938471.

3.Upadhyai R, Upadhyai N, Jain AK, Roy H, Pant V. Health care service quality: a journey so far. Benchmarking. 2020; 27(6): 1893-927. doi: 10.1108/BIJ-03-2019-0140.

4.Jain SK, Gupta G. Measuring service quality: SERVQUAL vs. SERVPERF scales. Vikalpa. 2004; 29(2): 25-37.

5.Duggirala M, Rajendran C, Anantharaman RN. Patient‐perceived dimensions of total quality service in healthcare. Benchmarking. 2008; 15(5): 560-83. doi: 10.1108/14635770810903150.

6.Ramsaran-Fowdar RR. The relative importance of service dimensions in a healthcare setting. Int J Health Care Qual Assur. 2008; 21(1): 104-24. doi: 10.1108/09526860810841192.

7.Gronroos C. A service quality model and its marketing implications. Eur J Mark. 1984; 18(4): 36-44.

8.Das J, Hammer J. Quality of primary care in low-income countries: facts and economics. Annu Rev Econ. 2014; 6: 525-53. doi: 10.1146/annurev-economics-080213-041350.

9.Berry LL, Bendapudi N. Health care: a fertile field for service research. J Serv Res. 2007; 10(2): 111-22. doi: 10.1177/1094670507306682.

10.Brown SW, Swartz TA. A gap analysis of professional service quality. J Mark. 1989; 53(2): 92-8.

11.Lovelock CH, Wirtz J. Services marketing: people, technology, strategy. 7th ed. Upper Saddle River, NJ: Pearson; 2011.

12.Philip G, Hazlett SA. The measurement of service quality: a new P-C-P attributes model. Int J Qual Reliab Manag. 1997; 14(3): 260-86.

13.Brooks J, King N. Qualitative psychology in the real world: the utility of template analysis. Paper presented at: 2012 British Psychological Society Annual Conference; 2012 April 18-20; London, United Kingdom.

14.Zhao ZG, Cheng JQ, Xu SL, Hou WL, Richardus JH. A quality assessment index framework for public health services: a Delphi study. Public Health. 2015; 129(1): 43-51. doi: 10.1016/j.puhe.2014.10.016.

15.Zamanzadeh V, Ghahramanian A, Rassouli M, Abbaszadeh A, Alavi-Majd H, Nikanfar AR. Design and implementation content validity study: development of an instrument for measuring patient-centered communication. J Caring Sci. 2015; 4(2): 165-78. doi: 10.15171/jcs.2015.017.

16.Chahal H, Kumari N. Development of multidimensional scale for healthcare service quality (HCSQ) in Indian context. J Indian Bus Res. 2010; 2(4): 230-55. doi: 10.1108/17554191011084157.

17.Hair JF, Black WC, Babin BJ, Anderson RE, Tatham RL. Multivariate data analysis. 6th ed. Upper Saddle River, NJ: Pearson Prentice Hall; 2006.

18.Polit DF, Beck CT. The content validity index: are you sure you know what's being reported? Critique and recommendations. Res Nurs Health. 2006; 29(5): 489-97. doi: 10.1002/nur.20147.

19.McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb). 2012; 22(3): 276-82.

20.Aagja JP, Garg R. Measuring perceived service quality for public hospitals (PubHosQual) in the Indian context. Int J Pharm Healthc Mark. 2010; 4(1): 60-83. doi: 10.1108/17506121011036033.

21.Thawesaengskulthai N, Wongrukmit P, Dahlgaard JJ. Hospital service quality measurement models: patients from Asia, Europe, Australia and America. Total Qual Manag Bus Excell. 2015; 26(9–10): 1029-41. doi: 10.1080/14783363.2015.1068596.

Acknowledgements

The author(s) received no financial support for the research, authorship and/or publication of this article.

Corresponding author

Raghav Upadhyai can be contacted at: raghav.upadhyai@gmail.com
