Abstract
Purpose
The purpose of this article is to investigate the factors that explain why customers may be willing to use chatbots in Zimbabwe as an e-banking customer service gateway, an area that remains under-researched.
Design/methodology/approach
The research study applied a cross-sectional survey of 430 customers from five selected commercial banks conducted in Harare, the capital city of Zimbabwe. Hypotheses were tested using structural equation modelling.
Findings
The research study showed that customers' intention to use chatbots is directly affected by chatbots' expected performance, the habit of using them and other factors.
Research limitations/implications
To better appreciate the current research concept, there is a need to replicate the same study in other contexts to enhance generalisability.
Practical implications
Chatbots are a trending new technology that banks are increasingly adopting; banks have to consider that customers need time to get used to them.
Originality/value
This study contributes to bridging the knowledge gap as it investigates the factors that explain why bank customers may be willing to use chatbots in five selected commercial Zimbabwean banks. This is a pioneering study in the context of a developing economy such as Zimbabwe.
Citation
Nyagadza, B., Muposhi, A., Mazuruse, G., Makoni, T., Chuchu, T., Maziriri, E.T. and Chare, A. (2024), "Prognosticating anthropomorphic chatbots' usage intention as an e-banking customer service gateway: cogitations from Zimbabwe", PSU Research Review, Vol. 8 No. 2, pp. 356-372. https://doi.org/10.1108/PRR-10-2021-0057
Publisher
Emerald Publishing Limited
Copyright © 2022, Brighton Nyagadza, Asphat Muposhi, Gideon Mazuruse, Tendai Makoni, Tinashe Chuchu, Eugine T. Maziriri and Anyway Chare
License
Published in PSU Research Review. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction and background
Chatbots emerged from artificial intelligence (AI) research and development, which has its foundations back in 1956, with the first chatbots being ELIZA (Han, 2021) and A.L.I.C.E (Chaves and Gerosa, 2021). Chatbots mimic customised human language and speech to enhance customer experience (Wang, 2021). The major advantages of using chatbots include the ability to allow real-time communication through platforms such as Facebook Messenger, Slack and WhatsApp, amongst others. The emergence of the Fourth Industrial Revolution has led to the proliferation and intensified use of new technologies (Nyagadza et al., 2021), resulting in calls for the development of models for dealing with information, text and services (Valtolina et al., 2020).
The novelty of the current research is in investigating the motivation towards chatbot usage as an e-banking customer service gateway in Zimbabwe, a developing country in sub-Saharan Africa. The contribution of the current study is to showcase what chatbots bring in bi-directional value creation and mutual influence through active customer interaction, engagement and participation (Flavian et al., 2019), upon acceptance by customers as an e-banking service gateway. Further, as a result of technology dynamics, chatbots are deemed to boost the platform revolution, leading to new banking business ecosystems for customer connections and interactivity (Wang, 2021). Chatbots have the potential to enhance a participatory culture which necessitates customer engagement (Weißensteiner, 2018). Existing literature has not dwelt much on examining whether chatbots are effective in banking business communication applications, user efficacy and customer gratification. Prior research has mainly focussed on social and traditional media users (e.g. Melián-González et al., 2021; Sheehan, 2018). The main question that remains unanswered is whether chatbots will be easily accepted and trusted by the majority of banking customers in Zimbabwe. Moreover, it remains unknown whether chatbots offer perceived privacy and security cover in the e-banking customer service experience. This concern is supported by researchers who argue that trust is a fundamental factor of technology success (Yen and Chiang, 2020). The current study seeks to address the following research objectives: (1) to predict the factors that influence customers' trust in chatbots, (2) to evaluate salient customer perceptions of chatbots that influence trust in them and (3) to explore the relationship between customers' trust and their intention to use chatbots as an e-banking customer service gateway.
The remainder of this paper is structured as follows: the first section discusses the theory and literature review, hypotheses and research conceptual model development; this is followed by a section on methodological delineations, then the analysis of results and, finally, the conclusions, research implications, limitations and future research directions.
Literature review and model development
The current section presents the relevant theory and literature reviewed in line with the study, hypotheses and research model development.
Theory and modelling
The current study is based on an anthropomorphism philosophy, which has been applied in different studies, for example, Han (2021) and Weißensteiner (2018). In line with technology acceptance, the current study adopts the unified theory of acceptance and use of technology 2 (UTAUT2). The UTAUT2 model, extended by Venkatesh et al. (2012), provides a better explanation than earlier models and fits the current research study, as it depicts behavioural intentions and technology use better than the earlier model(s). The use of UTAUT2 in the current study is justified by its application in technology adoption contexts such as mobile applications (Chao, 2019), software (Li and Mao, 2015), social network sites (Herrero and San Martin, 2017), robotics and travel and tourism (Melián-González et al., 2021). In the current study, the UTAUT2 model is applied to investigate chatbot usage in e-banking customer service, as shown by the explanation of the following constructs.
Performance expectancy
This construct denotes the extent to which technology usage benefits customers, including influencing performance expectations. Evidence from research shows that the most significant predictor of technology acceptance is performance expectancy (Chung et al., 2018). This expectancy is closely related to technology gratification, which entails the ability of new technology to enhance customer satisfaction (Liu et al., 2016). Chatbots' interactivity and accessibility are essential characteristics of performance expectancy (Sundar and Kim, 2019). Since chatbots are used to assist in e-customer service by banks, it is proposed that:
Chatbots' performance expectancy positively influences customers' trust.
Effort expectancy
Effort expectancy can be viewed as the degree of ease of use of a technology system (Chao, 2019). Basic antecedents of effort expectancy include ease of use and complexity (Cimperman et al., 2016). Effort expectancy is deemed to be a direct determinant of trust in chatbots' usage by banking customers, according to the study undertaken by Khalilzadeh et al. (2017), Hoque and Sorwar (2017) and Šumak and Šorgo (2016). Therefore, it is hypothesised:
Chatbots' effort expectancy positively influences customers' trust.
Social influence
Chatbots have a tremendous social influence or social presence. Social influence represents a sense of sociability, which in e-banking customer service and e-commerce affects the level of trust (Hassanein and Head, 2007) and future usage intention (Yen and Chiang, 2020). Customers' trust in chatbots is affected by chatbots' social influence or presence. The social influence of chatbots positively impacts consumers' use intention and determines their trust levels (Ben Mimoun et al., 2017). Hence, it is hypothesised that:
Chatbots' social influence positively influences customers' trust.
Hedonic motivation
Gratification associated with chatbots is another essential aspect that influences customers' hedonistic desire and trust when they use them for e-banking transactions. Hedonic motivation is a crucial element that explains why people use technological platforms and AI application software. Some customers can find the act of using chatbots a fun, enjoyable and diplomatic way of killing time (Ben Mimoun et al., 2017). This is due to the motivation to satisfy the hedonic and/or psychological needs that people desire (such as socialising, information, entertainment and status) (Li and Mao, 2015). Thus, it is proposed that:
Chatbots' hedonic motivation positively influences customers' trust.
Habitual usage
Chatbots are system applications that can be habitually used daily by customers (Weißensteiner, 2018) when making e-banking financial transactions (Morosan and DeFranco, 2016). Customers' habit of using chatbots is directly related to their past and present behaviour (Herrero and San Martin, 2017), which affects their trust levels in the chatbots usage intention (Xu, 2014). The utility of chatbots emanates from their ability to conduct several customer interactions simultaneously (Ivanov and Webster, 2017) and to communicate in natural language, thus enhancing interactivity with customers (Michiels, 2017). It is hypothesised that:
Chatbots' habitual usage positively influences customers' trust.
Perceived innovativeness
Chatbots' perceived innovativeness is directly related to utilitarian gratification, whereby individuals' technology utility needs are known to be information-seeking and/or self-presentation (Papacharissi and Mendelson, 2011). In this study, chatbots' perceived innovativeness is the willingness of customers to try out new technologies. Therefore, it is proposed that:
Chatbots' perceived innovativeness positively influences customers' trust.
Attitude towards self-service technologies (SSTs)
If customers get the right experience, they perceive chatbots positively (Price, 2018), and usually their trust is enhanced if the degree of innovativeness tallies with their expectations (Dreyer, 2016). SSTs like chatbots were found to be more acceptable to millennials than any other age group, and the associated trust shapes their attitude (Price, 2018). Hence, age may affect experience and trust levels. It is hypothesised that:
Chatbots' perceived innovativeness positively influences attitude towards SSTs.
Attitude towards SSTs positively influences customers' trust.
Inconvenience
Because chatbots may be skilled in imitating human conversations, hackers can capture the exchanged information, which may be a security risk for banking customers (Alalwan et al., 2018). Furthermore, chatbots do not have their own personality, identity, feelings or emotions like people (Carter and Knol, 2019). Banking customers may desire to talk to physical humans rather than chatbots (Walker and Johnson, 2006). There is a possibility that configuration errors may arise and damage banks' brand image (Michels, 2017; Nyagadza, 2019). Such vulnerabilities can lead to the phishing of confidential information, since chatbots use open Internet protocols (Kar and Haldar, 2016). As a result, we propose that:
Chatbots' inconveniences negatively influence customers' trust.
Anthropomorphism
Research has shown that people are more likely to engage with technology that gives them an experience depicting human-like features through aesthetic cues driven by anthropomorphism (Han, 2021). The same applies to chatbot usage in engaging customers for e-banking services. Furthermore, humanoid chatbots with human voice-based communication technology, like robots, influence customers' trust and enjoyment perception, leading to the intention to use the humanoid as their aid (Qui and Benbasat, 2009). Language style and names as cues in chatbots increase their influence on customers' attitudes, satisfaction (Araujo, 2018) and emotional connection to the corporate brand that provides the chatbots (Nyagadza et al., 2021). Thus, it is proposed that:
Chatbots' anthropomorphism positively influences customers' trust.
Automation
Jobs with higher automation have been shown to have higher job insecurity and are associated with poorer health (Papacharissi and Mendelson, 2011). Technology, furthermore, has been linked to the displacement of people from work. Naturally, customers may have a negative attitude towards the use of chatbots in e-banking services, as chatbots are perceived as likely to replace humans (Arenas Gaitán et al., 2015). This leads to the following hypothesis:
Belief that automation will replace workers negatively influences customers' trust.
Perceived privacy risk
Perceived privacy risk refers to the possibility that chatbots may reveal customers' personal information to third parties (Cheng and Jiang, 2020). Under normal circumstances, customers are concerned about privacy issues when they conduct e-banking transactions, whether via an official website or social media platforms (Nyagadza, 2020), as revealed by a variety of scholars (Sundar and Kim, 2019). Mobile banking, mobile and e-payment systems and smart devices such as watches raise the same concerns on the customers' side, and the same holds in banking (Dehghani, 2018). Therefore, we propose that:
Chatbots' perceived privacy risk negatively influences customers' trust.
Trust and chatbots usage intention
Intention can be defined as a person's subjective probability of performing an actual behaviour (Cheng and Jiang, 2020). In prior research, trust has been operationalised (Alalwan et al., 2018) as customers' perception of chatbots' integrity, benevolence and ability. Furthermore, trust and intention to use chatbots are connected to the level of loyalty to a bank's corporate brand (Nyagadza et al., 2021) and associated satisfaction levels (Weißensteiner, 2018). Based on this research evidence in the literature, we hypothesise that:
Customers' trust in chatbots usage positively influences customers' chatbots usage intentions.
Based on the theoretical and literature review and posited hypotheses, the conceptual model supporting this study is illustrated in Figure 1.
Research methodological delineations
The sample, design of the questionnaire and measures and data collection methods applied in the research are explained in this section. The researchers collected the data from January 2021 to April 2021. The research applied a quantitative research method underpinned by a positivist philosophy.
Design of questionnaire and measures
The study constructs in Table A1 (Appendix) were measured using item scales adapted from literature specifically related to the intention to use chatbots as an e-banking customer service gateway. Performance expectancy can be found in Venkatesh et al. (2012) and Melián-González et al. (2021). Effort expectancy, social influence, hedonic motivations and habitual use can be found in Melián-González et al. (2021). Perceived innovativeness was developed from Parra-López et al. (2011). Attitude towards SSTs (Dabholkar and Baggozi, 2002), inconvenience (Alalwan et al., 2018), anthropomorphism (Sheehan, 2018), automation (Melián-González et al., 2021), perceived privacy risk (Cheng and Jiang, 2020), chatbots usage trust (Yen and Chiang, 2020) and chatbots usage intention (Parra-López et al., 2011) were all subjected to examination via confirmatory factor analysis.
Sampling and data collection
The research study applied a cross-sectional survey of 430 e-banking customers conducted in Harare, the capital city of Zimbabwe. Participation was voluntary, and the objectives of the study were explained to the participants before they completed the questionnaire. Respondents took about 20 min on average to complete the questionnaire. The sample profile is presented in Table 1. The majority of the respondents (69.2%) were aged between 20 and 39 years. Most of the respondents (67.2%) had already earned at least a Bachelor's degree. The majority of the respondents (84.4%) were earning less than $1,500 per month.
Pre-testing and pilot study
A pilot study was conducted on respondents using stratified probability sampling from five banks. These respondents represented the recommended 5% of the research study sample.
Non-response bias test
Armstrong and Overton's extrapolation procedure, comparing early and late respondents with t-tests, was applied to confirm that the responses were free of non-response bias.
Ethical considerations
Ethical considerations related to privacy, informed consent, freedom of response, professionalism, integrity, accuracy and values of research have been adhered to by the researchers.
Analysis and results
Sample adequacy and test of normality
The proportions of each item's variance explained by the extracted factors (communalities) were all above the 0.50 threshold, further confirming that each item shared common variance with the other items. We also inspected the multivariate normality of our data through verification of kurtosis and skewness values. The KMO result indicated that the sample size was adequate, while Bartlett's test showed that there were significant relationships between the variables, confirming the suitability of factor analysis. Moreover, the normality assumption was supported by skewness values that ranged from 0.67 to 1.95 and kurtosis values that ranged from 1.09 to 1.87.
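The skewness and kurtosis screening described above can be sketched as follows. This is a minimal Python illustration using the moment-based formulas; the cut-offs (|skewness| < 2, |excess kurtosis| < 7) are common rules of thumb assumed for illustration, not thresholds taken from the study.

```python
import numpy as np

def skew_kurtosis(x: np.ndarray) -> tuple[float, float]:
    """Moment-based skewness and excess kurtosis of a 1-D sample."""
    d = x - x.mean()
    m2 = np.mean(d ** 2)       # second central moment (biased variance)
    m3 = np.mean(d ** 3)       # third central moment
    m4 = np.mean(d ** 4)       # fourth central moment
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0  # excess kurtosis (0 for a normal distribution)
    return float(skew), float(kurt)

def passes_normality_screen(item_scores: np.ndarray) -> bool:
    """Screen one item column against common rule-of-thumb cut-offs."""
    skew, kurt = skew_kurtosis(item_scores)
    return abs(skew) < 2.0 and abs(kurt) < 7.0
```

In practice each questionnaire item column would be screened this way before proceeding to factor analysis.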
Reliability analysis
The Cronbach's alpha values ranged between 0.801 and 0.929 (Table 1), indicating that all the observed items are reliable and consistent. All the obtained measures were above the acceptable minimum threshold of 0.70 (Schumaker and Lomax, 2016).
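As a sketch of how such reliability figures are obtained, Cronbach's alpha can be computed from a respondents-by-items score matrix. The formula below is the standard one; the study's own figures were presumably produced in statistical software.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

An alpha of at least 0.70 is conventionally taken as acceptable, matching the threshold cited above.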
Convergent validity
From the results (Table 2), the average variance extracted (AVE) values are: performance expectancy (0.711), effort expectancy (0.718), social influence (0.651), hedonic motivation (0.732), habitual use (0.694), perceived innovativeness (0.719), self-service technology (0.656), inconveniences (0.718), anthropomorphism (0.708), automation (0.649), perceived privacy risk (0.681), chatbots usage trust (0.656) and chatbots usage intention (0.657). The AVE values for the convergent validity tests across constructs ranged between 0.649 and 0.732, all above the 0.50 threshold, showing that the indicators sufficiently measure the same construct. All constructs passed the convergent validity assessment.
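The AVE and composite reliability (CR) figures in Table 2 follow directly from the standardised factor loadings: AVE is the mean squared loading, and CR is the squared sum of loadings over itself plus the summed error variances. A minimal sketch, checked against the performance expectancy loadings from Table 2 (AVE ≈ 0.71, CR ≈ 0.907):

```python
import numpy as np

def ave_and_cr(loadings: list[float]) -> tuple[float, float]:
    """Average variance extracted (AVE) and composite reliability (CR)
    from one construct's standardised factor loadings."""
    lam = np.asarray(loadings)
    sq = lam ** 2
    ave = sq.mean()                                          # mean squared loading
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - sq).sum())  # composite reliability
    return float(ave), float(cr)

# Performance expectancy loadings (PE1-PE4) from Table 2
ave, cr = ave_and_cr([0.854, 0.827, 0.811, 0.878])
```

AVE above 0.50 is the conventional criterion for convergent validity applied in the text above.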
Discriminant validity
The results show that the latent constructs had square roots of AVE of 0.84, 0.81, 0.80, 0.82, 0.81, 0.81, 0.85, 0.82, 0.85, 0.83, 0.85, 0.84, 0.84, 0.85, 0.82, 0.81 and 0.81, respectively. The square roots of AVE of the latent constructs were greater than the inter-construct correlations, confirming discriminant validity.
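The comparison described here is the Fornell-Larcker criterion: each construct's square root of AVE must exceed its largest absolute correlation with any other construct. A minimal sketch; the two-construct example below is illustrative and is not the study's full correlation matrix.

```python
import numpy as np

def fornell_larcker_ok(aves: np.ndarray, corr: np.ndarray) -> bool:
    """True when every construct's sqrt(AVE) exceeds its largest
    absolute off-diagonal correlation (Fornell-Larcker criterion)."""
    root_ave = np.sqrt(aves)
    off_diag = np.abs(corr - np.diag(np.diag(corr)))  # zero out the diagonal
    return bool(np.all(root_ave > off_diag.max(axis=1)))

# Illustrative two-construct check: sqrt(0.711) ~ 0.84 and sqrt(0.718) ~ 0.85
# both exceed an assumed inter-construct correlation of 0.60
ok = fornell_larcker_ok(np.array([0.711, 0.718]),
                        np.array([[1.0, 0.60], [0.60, 1.0]]))
```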
Discussion and implications
Theoretical, practical and future research implications as well as limitations of the study findings are discussed in this section.
Research hypothesis testing
In order to test the structural relationships hypothesised in the research model, structural equation modelling was applied using SmartPLS (shown in Figure 2).
The majority of the paths were statistically significant. Chatbots influence customers' trust and enjoyment perception, leading to the intention to use the humanoid as their aid (β = 0.467, t = 5.126, p = 0.000) (Qui and Benbasat, 2009). Customers may have a negative attitude towards the use of chatbots in e-banking services, as chatbots are perceived as likely to replace humans (β = −0.051, t = 0.421, p = 0.674) (Arenas Gaitán et al., 2015).
There is motivation to satisfy the hedonic and/or psychological needs that people desire (such as socialising, information, entertainment and status) (β = −0.205, t = 2.104, p = 0.036) (Li and Mao, 2015). The results (β = −0.196, t = 1.549, p = 0.122) depict that chatbots do not have their own personality, identity, feelings or emotions like people (Carter and Knol, 2019); therefore, banking customers may desire to talk to physical humans rather than chatbots (Walker and Johnson, 2006).
Evidence from research shows that the most significant predictor of technology acceptance (for example, of chatbots in banking customer service) is performance expectancy (Chung et al., 2018), as unearthed in the current study (β = 0.320, t = 8.132, p = 0.000). Chatbots' perceived innovativeness is directly related to utilitarian gratification (β = −0.383, t = 2.386, p = 0.017), whereby individuals' technology utility needs are known to be information seeking and/or self-presentation (Papacharissi and Mendelson, 2011).
Privacy and security trust (β = −0.053, t = 0.581, p = 0.561) in chatbot usage in banking is a major issue of concern, especially when dealing with personal information such as email addresses, cell phone numbers, names or physical addresses. The relationship in the results (β = 0.178, t = 2.456, p = 0.024) implies a psychological connection with users who see the chatbots as close to human contact (Cyr et al., 2007). For this particular study, attitudes can be viewed as an antecedent of behavioural intention (Dreyer, 2016) towards SSTs (β = 0.512, t = 3.305, p = 0.001) (see Table 3).
For a path to be significant, its bootstrapped confidence interval should not contain 0 (Schumaker and Lomax, 2016). The confidence intervals in Table 3 confirm the significance of the paths in the model.
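The bootstrapped-confidence-interval logic can be sketched generically: resample the data with replacement, recompute the statistic each time and take percentile bounds, then check whether the interval excludes 0. The code below is an illustrative percentile bootstrap for a correlation, not the SmartPLS bootstrapping routine the study actually used.

```python
import numpy as np

def bootstrap_ci(x: np.ndarray, y: np.ndarray, n_boot: int = 2000,
                 alpha: float = 0.05, seed: int = 0) -> tuple[float, float]:
    """Percentile bootstrap confidence interval for corr(x, y)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample respondents with replacement
        stats[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

A path (or correlation) whose interval excludes 0 is taken as significant at the corresponding level.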
Model fit indices
The results showed that the Root Mean Square Error of Approximation (RMSEA) = 0.075, the Comparative Fit Index (CFI) = 1.01 and the Goodness of Fit Index (GFI) = 1.05, indicating that the empirical data fit the developed theoretical model. The goodness-of-fit value for this study is 0.819, showing that the developed model has large explanatory power for chatbots' usage intention. The predictors have a direct effect on chatbots' usage trust. The SST R² value is 0.987, contributed by perceived innovativeness. Moreover, chatbots' usage intention has an R² value of 0.922. The developed model thus has substantial explanatory power. The Q² values for the study model support that the path model's predictive relevance was adequate for the endogenous constructs.
Theoretical implications
The current research findings contribute to the existing body of knowledge and theoretical literature. The study gives insights into the factors which influence customers' trust in chatbots, the salient customer perceptions of chatbots that influence trust in them, and the link between customers' trust and their intention to use chatbots as an e-banking customer service gateway. The study provides empirical support for anthropomorphism as a socio-technological theory, based on prior and current practical research. This connects with the nature of human–human social interaction theories and their relationship to human–computer interactions. The current research thus paves the way for additional inquiry into this emerging dimension of e-banking communication technology.
Practical implications
Security emerged as a key imperative in promoting the adoption of chatbots. It follows that, in order to support "smart" e-banking services in Zimbabwe, technologically savvy financial institutions may need to revolutionise their approach towards direct digital customer service through chatbots (which are developed via machine learning and natural language computing, often based on AI). Commercial banks in Zimbabwe have already moved in this direction: Steward Bank (an organic digital bank) introduced "Batsi" in 2018, connected to Facebook, the Square Mobile App and its e-banking service platform. The use of chatbots in banking as a customer service gateway provides an advantage to the banks, because customers are able to connect and transact seamlessly via various social media platforms or channels. However, chatbots need to be managed properly, as they are connected to global information access networks, which may be vulnerable to information privacy hacking and phishing. Another implication of the current study for banks and other financial organisations is the need to deal with the notion of technology replacing human beings. Banks may need to re-strategise their human resources policies and regulations as a countermove to avoid job losses and a complete absence of people.
Study limitations, future research implications and conclusion
The study has limitations which may affect the generalisability of the results, since they apply only to the context studied. Complementary research can be done in other parts of the world to allow cross-cultural comparisons as well as methodological validation. Another limitation was the cross-sectional nature of the study, which does not allow causal conclusions to be drawn. In future, longitudinal empirical research can be undertaken to examine variations across economic situations and to evaluate other relevant theoretical frameworks on anthropomorphic chatbots' usage intention as an e-banking customer service gateway.
Descriptive statistics
Construct | Item | Mean | SD | Sk | Ku | Cronbach Alpha | Result | Communalities |
---|---|---|---|---|---|---|---|---|
Performance expectancy (PE) | PE1 | 4.27 | 1.22 | 0.832 | 1.75 | 0.873 | Reliable | 0.861 |
PE2 | 0.850 | 1.36 | ||||||
PE3 | 0.934 | 1.73 | ||||||
PE4 | 1.35 | 1.76 | ||||||
Effort expectancy (EE) | EE1 | 4.13 | 1.02 | 1.23 | 1.65 | 0.834 | Reliable | 0.857 |
EE2 | 1.74 | 1.79 | ||||||
EE3 | 0.987 | 1.76 | ||||||
Social influence (SI) | SI1 | 4.14 | 1.54 | 0.751 | 1.79 | 0.826 | Reliable | 0.870 |
SI2 | 0.820 | 1.82 | ||||||
SI3 | 1.24 | 1.87 | ||||||
Hedonic motivations (HM) | HM1 | 4.29 | 1.21 | 1.32 | 1.81 | 0.889 | Reliable | 0.835 |
HM2 | 1.65 | 1.87 | ||||||
HM3 | 1.23 | 1.69 | ||||||
Habitual user (HU) | HU1 | 2.56 | 1.16 | 0.89 | 1.76 | 0.815 | Reliable | 0.836 |
HU2 | 1.34 | 1.75 | ||||||
HU3 | 1.25 | 1.72 | ||||||
Perceived innovativeness (PI) | PI1 | 4.19 | 1.20 | 0.956 | 1.61 | 0.801 | Reliable | 0.825 |
PI2 | 0.793 | 1.68 | ||||||
PI3 | 1.34 | 1.65 | ||||||
Self-service technology (SSTA) | SSTA1 | 4.07 | 1.27 | 1.76 | 1.28 | 0.829 | Reliable | 0.901 |
SSTA2 | 1.43 | 1.25 | ||||||
SSTA3 | 1.25 | 1.37 | ||||||
SSTA4 | 1.43 | 1.66 | ||||||
Inconvenience (INC) | INC1 | 2.32 | 1.05 | 0.96 | 1.72 | 0.811 | Reliable | 0.825 |
INC2 | 0.79 | 1.35 | ||||||
INC3 | 0.87 | 1.50 | ||||||
INC4 | 0.96 | 1.23 | ||||||
Anthropomorphism (ANT) | ANT1 | 4.12 | 1.14 | 1.18 | 1.52 | 0.906 | Reliable | 0.956 |
ANT2 | 1.95 | 1.56 | ||||||
ANT3 | 1.67 | 1.27 | ||||||
ANT4 | 1.79 | 1.53 | ||||||
Automation (AUT) | AUT1 | 2.2 | 0.35 | 1.66 | 1.62 | 0.818 | Reliable | 0.820 |
AUT2 | 1.54 | 1.25 | ||||||
AUT3 | 1.13 | 1.09 | ||||||
Perceived privacy risk (PPR) | PPR1 | 4.3 | 0.93 | 0.67 | 1.85 | 0.908 | Reliable | 0.926 |
PPR2 | 0.97 | 1.56 | ||||||
PPR3 | 0.85 | 1.59 | ||||||
Chatbots usage trust (CUT) | CUT1 | 4.24 | 0.96 | 1.69 | 1.47 | 0.821 | Reliable | 0.829 |
CUT2 | 1.56 | 1.58 | ||||||
CUT3 | 1.43 | 1.77 | ||||||
Chatbots usage intention (CUI) | CUI1 | 4.12 | 0.87 | 1.35 | 1.25 | 0.833 | Reliable | 0.841 |
CUI2 | 1.24 | 1.12 | ||||||
CUI3 | 1.39 | 1.47 |
Source(s): Primary data (2021)
Convergent validity
Construct | Item | Factor loading (F) | F² | 1 - F² | Number of indicators (n) | CR | AVE | Result |
---|---|---|---|---|---|---|---|---|
Performance expectancy (PE) | PE1 | 0.854 | 0.729 | 0.271 | 4 | 0.907 | 0.711 | Achieved |
PE2 | 0.827 | 0.684 | 0.316 | |||||
PE3 | 0.811 | 0.658 | 0.342 | |||||
PE4 | 0.878 | 0.771 | 0.229 | |||||
Effort expectancy (EE) | EE1 | 0.853 | 0.728 | 0.272 | 3 | 0.884 | 0.718 | Achieved |
EE2 | 0.809 | 0.654 | 0.346 | |||||
EE3 | 0.879 | 0.773 | 0.227 | |||||
Social influence (SI) | SI1 | 0.788 | 0.621 | 0.379 | 3 | 0.848 | 0.651 | Achieved |
SI2 | 0.774 | 0.600 | 0.400 | |||||
SI3 | 0.856 | 0.733 | 0.267 | |||||
Hedonic motivations (HM) | HM1 | 0.823 | 0.677 | 0.323 | 3 | 0.785 | 0.732 | Achieved |
HM2 | 0.901 | 0.812 | 0.188 | |||||
HM3 | 0.841 | 0.707 | 0.293 | |||||
Habitual use (HU) | HU1 | 0.823 | 0.677 | 0.323 | 3 | 0.871 | 0.694 | Achieved |
HU2 | 0.886 | 0.785 | 0.215 | |||||
HU3 | 0.787 | 0.619 | 0.381 | |||||
Perceived innovativeness (PI) | PI1 | 0.885 | 0.783 | 0.217 | 3 | 0.897 | 0.719 | Achieved |
PI2 | 0.823 | 0.677 | 0.323 |
PI3 | 0.834 | 0.696 | 0.304 | |||||
Self-service technology (SSTA) | SSTA1 | 0.789 | 0.623 | 0.377 | 4 | 0.884 | 0.656 | Achieved |
SSTA2 | 0.804 | 0.646 | 0.354 | |||||
SSTA3 | 0.819 | 0.671 | 0.329 | |||||
SSTA4 | 0.826 | 0.682 | 0.318 | |||||
Inconvenience (INC) | INC1 | 0.902 | 0.814 | 0.186 | 4 | 0.910 | 0.718 | Achieved |
INC2 | 0.796 | 0.634 | 0.366 | |||||
INC3 | 0.874 | 0.764 | 0.236 | |||||
INC4 | 0.813 | 0.661 | 0.339 | |||||
Anthropomorphism (ANT) | ANT1 | 0.865 | 0.748 | 0.252 | 4 | 0.907 | 0.708 | Achieved |
ANT2 | 0.845 | 0.714 | 0.286 | |||||
ANT3 | 0.832 | 0.692 | 0.308 | |||||
ANT4 | 0.824 | 0.679 | 0.321 | |||||
Automation (AUT) | AUT1 | 0.827 | 0.684 | 0.316 | 3 | 0.847 | 0.649 | Achieved |
AUT2 | 0.805 | 0.648 | 0.352 | |||||
AUT3 | 0.785 | 0.616 | 0.384 | |||||
Perceived privacy risk (PPR) | PPR1 | 0.870 | 0.757 | 0.243 | 3 | 0.865 | 0.681 | Achieved |
PPR2 | 0.813 | 0.661 | 0.339 | |||||
PPR3 | 0.790 | 0.624 | 0.376 | |||||
Chatbots usage trust (CUT) | CUT1 | 0.818 | 0.669 | 0.331 | 3 | 0.850 | 0.656 | Achieved |
CUT2 | 0.804 | 0.671 | 0.329 | |||||
CUT3 | 0.793 | 0.629 | 0.371 | |||||
Chatbots usage intention (CUI) | CUI1 | 0.804 | 0.646 | 0.354 | 3 | 0.852 | 0.657 | Achieved |
CUI2 | 0.825 | 0.681 | 0.319 | |||||
CUI3 | 0.803 | 0.645 | 0.355 |
Source(s): Primary data (2021)
Hypothesis, path coefficients and results
Path | Path coefficient (β) | CI 2.5% | CI 97.5% | t-value | p-value | Significance level |
---|---|---|---|---|---|---|
ANT → CUT | 0.467 | 0.083 | 0.257 | 5.126 | 0.000 | Significant |
AUT → CUT | −0.051 | −0.433 | 0.577 | 0.421 | 0.674 | Not significant |
CUT → CUI | 0.960 | 0.322 | 0.649 | 12.567 | 0.000 | Significant |
EE → CUT | −0.198 | 0.064 | 0.868 | 2.214 | 0.031 | Significant |
HM → CUT | −0.205 | 0.311 | 1.098 | 2.104 | 0.036 | Significant |
HU → CUT | 0.621 | 0.015 | 0.630 | 4.964 | 0.000 | Significant |
INC → CUT | −0.196 | −0.146 | 0.619 | 1.549 | 0.122 | Not significant |
PE → CUT | 0.320 | 0.013 | 0.448 | 8.132 | 0.000 | Significant |
PI → CUT | −0.383 | 0.157 | 0.200 | 2.386 | 0.017 | Significant |
PI → SSTA | 0.993 | 0.965 | 0.986 | 11.053 | 0.000 | Significant |
PPR → CUT | −0.053 | −0.025 | 0.561 | 0.581 | 0.561 | Not significant |
SI → CUT | 0.178 | −0.654 | −0.150 | 2.456 | 0.024 | Significant |
SSTA → CUT | 0.512 | 0.014 | 0.336 | 3.305 | 0.001 | Significant |
Source(s): Primary data (2021)
Instrument statements and reliability
Construct | Item | Statement | F | α |
---|---|---|---|---|
Performance expectancy (PE) | PE1 | I find chatbots to be useful | 0.854 | 0.873 |
PE2 | Using chatbots helps me accomplish things quickly | 0.827 | ||
PE3 | Chatbots help solve doubts | 0.811 | ||
PE4 | Using chatbots improves information search | 0.878 | ||
Effort expectancy (EE) | EE1 | Learning how to use chatbots is easy for me | 0.853 | 0.834 |
EE2 | I find chatbots easy to use | 0.809 | ||
EE3 | It is easy for me to become skilful at using chatbots | 0.879 | ||
Social influence (SI) | SI1 | Many people I know use chatbots | 0.788 | 0.826 |
SI2 | People who influence my behaviour use chatbots | 0.774 | ||
SI3 | People whose opinions I value use chatbots | 0.856 | ||
Hedonic motivations (HM) | HM1 | Using chatbots is fun | 0.823 | 0.889 |
HM2 | Using chatbots is enjoyable | 0.901 | ||
HM3 | Using chatbots is very entertaining | 0.841 | ||
Habitual user (HU) | HU1 | The use of chatbots has become a habit for me | 0.823 | 0.815 |
HU2 | Using chatbots has become natural to me | 0.886 | ||
HU3 | I intend to use chatbots | 0.787 | ||
Perceived innovativeness (PI) | PI1 | I find new tools easy to use | 0.885 | 0.801 |
PI2 | I am equipped with technological skills, I like to be updated with all the latest things | 0.823 | ||
PI3 | I am always seeking new ways and new tools | 0.834 | ||
Self-service technology (SSTA) | SSTA1 | I like receiving e-banking customer services via IT | 0.789 | 0.829 |
SSTA2 | I think it is all right to receive e-banking customer services via self-service technologies | 0.804 | ||
SSTA3 | I think receiving e-banking customer services via self-service technologies is good | 0.819 | ||
SSTA4 | Receiving e-banking customer services via self-service technologies is comfortable | 0.826 | ||
Inconvenience (INC) | INC1 | I think the use of chatbots is inefficient since the chatbots frequently do not understand what I am expressing | 0.902 | 0.811 |
INC2 | I think using chatbots is impractical, since typing is required | 0.796 | ||
INC3 | I think expressing an idea to a chatbot is more complicated than to a human | 0.874 | ||
INC4 | I think that using chatbots is uncomfortable since I am required to express my ideas in a way understandable to the chatbot | 0.813 | ||
Anthropomorphism (ANT) | ANT1 | It is important that the conversation with a chatbot resembles one with a human being | 0.865 | 0.906 |
ANT2 | Conversations with chatbots should be natural | 0.845 | ||
ANT3 | Chatbots seem as if they understand the person with whom they are interacting | 0.832 | ||
ANT4 | Conversation with a chatbot should not be artificial | 0.824 | ||
Automation (AUT) | AUT1 | I think chatbots are going to replace workers | 0.827 | 0.818 |
AUT2 | Jobs that are currently performed by human beings will be performed by chatbots | 0.805 | ||
AUT3 | Firms will use more chatbots and fewer workers | 0.785 | ||
Perceived privacy risk (PPR) | PPR1 | My information can be used in a way I do foresee (Reverse coded) | 0.870 | 0.908 |
PPR2 | The information I submit can be misused | 0.813 | ||
PPR3 | There is too much uncertainty associated with e-banking customer service through chatbot agents | 0.790 | ||
Chatbots usage trust (CUT) | CUT1 | The conversational e-banking customer service chatbot is trustworthy | 0.818 | 0.821 |
CUT2 | I trust the conversational chatbot virtual agent | 0.804 | ||
CUT3 | The chatbot virtual agent is adequate for my needs | 0.793 | ||
Chatbots usage intention (CUI) | CUI1 | When required, I will use chatbots | 0.804 | 0.833 |
CUI2 | I intend to use chatbots in the future | 0.825 | ||
CUI3 | I think that more and more people will use chatbots | 0.803 |
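The α column reports Cronbach's alpha for each construct, and all constructs exceed the conventional 0.7 reliability threshold. As a hedged sketch of how alpha is computed from an n-respondents × k-items matrix of Likert scores (the responses below are hypothetical, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses for a 3-item construct
responses = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [3, 3, 2],
    [5, 5, 4],
    [2, 2, 3],
])
print(round(cronbach_alpha(responses), 3))
```

When all items move together perfectly the formula yields 1.0, and higher inter-item correlation pushes alpha toward that ceiling, which is why values such as 0.908 for PPR indicate strong internal consistency.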
References
Alalwan, A.A., Dwivedi, Y.K., Rana, N.P. and Algharabat, R. (2018), “Examining factors influencing Jordanian customers' intentions and adoption of internet banking: extending UTAUT2 with risk”, Journal of Retailing and Consumer Services, Vol. 40, pp. 125-138.
Araujo, T. (2018), “Living up to the chatbot hype: the influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions”, Computers in Human Behavior, Vol. 85, pp. 183-189.
Arenas Gaitán, J., Peral Peral, B. and Ramón Jerónimo, M. (2015), “Elderly and internet banking: an application of UTAUT2”, Journal of Internet Banking and Commerce, Vol. 20 No. 1, pp. 1-23.
Ben Mimoun, M.S., Poncin, I. and Garnier, M. (2017), “Animated conversational agents and E-consumer productivity: the roles of agents and individual characteristics”, Information and Management, Vol. 54 No. 5, pp. 545-559.
Carter, E. and Knol, C. (2019), “Chatbots — an organisation's friend or foe?”, Research in Hospitality Management, Vol. 9 No. 2, pp. 113-116.
Chao, C.-M. (2019), “Factors determining the behavioural intention to use mobile learning: an application and extension of the UTAUT model”, Frontiers in Psychology, Vol. 10, pp. 1-14.
Chaves, A.P. and Gerosa, M.A. (2021), “How should my chatbot interact? A survey on social characteristics in human–chatbot interaction design”, International Journal of Human–Computer Interaction, Vol. 37 No. 8, pp. 729-758.
Cheng, Y. and Jiang, H. (2020), “How do AI-driven chatbots impact user experience? Examining gratifications, perceived privacy risk, satisfaction, loyalty, and continued use”, Journal of Broadcasting and Electronic Media, Vol. 64 No. 4, pp. 592-614.
Chung, N., Lee, H., Kim, J.Y. and Koo, C. (2018), “The role of augmented reality for experience-influenced environments: the case of cultural heritage tourism in Korea”, Journal of Travel Research, Vol. 57 No. 5, pp. 627-643.
Cimperman, M., Brenčič, M.M. and Trkman, P. (2016), “Analyzing older users' home telehealth services acceptance behavior—applying an extended UTAUT model”, International Journal of Medical Informatics, Vol. 90, pp. 22-31.
Cyr, D., Hassanein, K., Head, M. and Ivanov, A. (2007), “The role of social presence in establishing loyalty in e-service environments”, Interacting with Computers, Vol. 19 No. 1, pp. 43-56.
Dabholkar, P.A. and Bagozzi, R.P. (2002), “An attitudinal model of technology-based self-service: moderating effects of consumer traits and situational factors”, Journal of the Academy of Marketing Science, Vol. 30 No. 3, pp. 184-201.
Dehghani, M. (2018), “Exploring the motivational factors on continuous usage intention of smartwatches among actual users”, Behaviour and Information Technology, Vol. 37 No. 2, pp. 145-158.
Dreyer, T. (2016), “Survey finds half of consumers would prefer to conduct all customer service via messaging”, available at: https://venturebeat.com/2016/12/12/new-survey-and-insights-about-what-consumers-want-in-a-chatbot/ (accessed April 2021).
Flavian, C., Ibañez-Sanchez, S. and Orús, C. (2019), “The impact of virtual, augmented and mixed reality technologies on the customer experience”, Journal of Business Research, Vol. 100, pp. 547-560.
Han, M.C. (2021), “The impact of anthropomorphism on consumers' purchase decision in chatbot commerce”, Journal of Internet Commerce, Vol. 20 No. 1, pp. 46-65.
Hassanein, K. and Head, M. (2007), “Manipulating perceived social presence through the web interface and its impact on attitude towards online shopping”, International Journal of Human-Computer Studies, Vol. 65 No. 8, pp. 689-708.
Herrero, Á. and San Martín, H. (2017), “Explaining the adoption of social networks sites for sharing user-generated content: a revision of the UTAUT2”, Computers in Human Behavior, Vol. 71, pp. 209-217.
Hoque, R. and Sorwar, G. (2017), “Understanding factors influencing the adoption of mHealth by the elderly: an extension of the UTAUT model”, International Journal of Medical Informatics, Vol. 101, pp. 75-84.
Ivanov, S. and Webster, C. (2017), “Adoption of robots, artificial intelligence and service automation by travel, tourism and hospitality companies – a cost-benefit analysis”, International Scientific Conference ‘Contemporary Tourism – Traditions and Innovations’, 19-21 October 2017, Sofia University.
Kar, R. and Haldar, R. (2016), “Applying chatbots to the internet of things: opportunities and architectural elements”, International Journal of Advanced Computer Science and Applications, Vol. 7 No. 11, pp. 147-154.
Khalilzadeh, J., Ozturk, A.B. and Bilgihan, A. (2017), “Security-related factors in extended UTAUT model for NFC based mobile payment in the restaurant industry”, Computers in Human Behavior, Vol. 70, pp. 460-474.
Li, M. and Mao, J. (2015), “Hedonic or utilitarian? Exploring the impact of communication style alignment on user's perception of virtual health advisory services”, International Journal of Information Management, Vol. 35 No. 2, pp. 229-243.
Liu, I.L.B., Cheung, C.M.K. and Lee, M.K.O. (2016), “User satisfaction with microblogging: information dissemination versus social networking”, Journal of the Association for Information Science and Technology, Vol. 67 No. 1, pp. 56-70.
Melián-González, S., Gutiérrez-Taño, D. and Bulchand-Gidumal, J. (2021), “Predicting the intentions to use chatbots for travel and tourism”, Current Issues in Tourism, Vol. 24 No. 2, pp. 192-210.
Michiels, E. (2017), “Modelling chatbots with a cognitive system allows for a differentiating user experience”, Doctoral Consortium and Industry Track Papers, Vol. 2027, pp. 70-78.
Morosan, C. and DeFranco, A. (2016), “It's about time: revisiting UTAUT2 to examine consumers' intentions to use NFC mobile payments in hotels”, International Journal of Hospitality Management, Vol. 53, pp. 17-29.
Nyagadza, B. (2019), “Responding to change and customer value improvement: pragmatic advice to banks”, The Marketing Review (TMR), Vol. 19 Nos 3-4, pp. 235-252.
Nyagadza, B. (2020), “Search engine marketing and social media marketing predictive trends”, Journal of Digital Media and Policy (JDMP), Intellect.
Nyagadza, B., Kadembo, E.M. and Makasi, A. (2021), “When corporate brands tell stories: a Signalling theory perspective”, Cogent Psychology, Vol. 8 No. 1, pp. 1-30.
Papacharissi, Z. and Mendelson, A. (2011), “Toward a new(er) sociability: uses, gratifications and social capital on Facebook”, in Papathanassopoulos, S. (Ed.), Media Perspectives for the 21st Century, Routledge, pp. 212-230.
Parra-López, E., Bulchand-Gidumal, J., Gutiérrez-Taño, D. and Díaz-Armas, R. (2011), “Intentions to use social media in organizing and taking vacation trips”, Computers in Human Behavior, Vol. 27 No. 2, pp. 640-654.
Price, D. (2018), “Yes, chat bots are incredibly efficient. But your customers hate them”, available at: https://www.inc.com/dom-price/yes-chat-bots-are-incredibly-efficient-butyour-customers-hate-them.html (accessed April 2021).
Qiu, L. and Benbasat, I. (2009), “Evaluating anthropomorphic product recommendation agents: a social relationship perspective to designing information systems”, Journal of Management Information Systems, Vol. 25 No. 4, pp. 145-182.
Schumacker, R.E. and Lomax, R.G. (2016), A Beginner's Guide to Structural Equation Modelling, 4th ed., Routledge, New York, NY.
Sheehan, B.T. (2018), “Customer service chatbots: anthropomorphism, adoption and word of mouth”, Doctoral Dissertation, Queensland University of Technology, available at: https://eprints.qut.edu.au/121188/.
Šumak, B. and Šorgo, A. (2016), “The acceptance and use of interactive whiteboards among teachers: differences in UTAUT determinants between pre-and post-adopters”, Computers in Human Behavior, Vol. 64, pp. 602-620.
Sundar, S.S. and Kim, J. (2019), “Machine heuristic: when we trust computers more than humans with our personal information”, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Vol. 538, pp. 1-9.
Valtolina, S., Barricelli, B.R. and Di Gaetano, S. (2020), “Communicability of traditional interfaces vs chatbots in healthcare and smart home domains”, Behaviour and Information Technology, Vol. 39 No. 1, pp. 108-132.
Venkatesh, V., Thong, J.Y. and Xu, X. (2012), “Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology”, MIS Quarterly, Vol. 36 No. 1, pp. 157-178.
Walker, R.H. and Johnson, L.W. (2006), “Why consumers use and do not use technology-enabled services”, Journal of Services Marketing, Vol. 20 No. 2, pp. 125-135.
Wang, C.L. (2021), “New frontiers and future directions in interactive marketing: inaugural Editorial”, Journal of Research in Interactive Marketing, Vol. 15 No. 1, pp. 1-9.
Weißensteiner, A.A.A. (2018), “Chatbots as an approach for a faster enquiry handling process in the service industry”, Master's Dissertation, Modul University, Vienna.
Xu, X. (2014), “Understanding users' continued use of online games: an application of UTAUT2 in social network games”, in Lorenz, P. (Ed.), The Sixth International Conferences on Advances in Multimedia (MMEDIA 2014), IARIA, Nice, pp. 58-65.
Yen, C. and Chiang, M.-M. (2020), “Trust me, if you can: a study on the factors that influence consumers' purchase intention triggered by chatbots based on brain image evidence and self-reported assessments”, Behaviour and Information Technology. doi: 10.1080/0144929X.2020.1743362.
Further reading
Bae, M.-Y. (2018), “Understanding the effect of the discrepancy between sought and obtained gratifications on social networking site users' satisfaction and continuance intention”, Computers in Human Behavior, Vol. 79, pp. 137-153.
Klein, A. and Sharma, V.M. (2018), “German Millennials' decision-making styles and their intention to participate in online group buying”, Journal of Internet Commerce, Vol. 17 No. 4, pp. 383-417.
Lee, S. and Choi, J. (2017), “Enhancing user experience with conversational agent for movie recommendation: effects of self-disclosure and reciprocity”, International Journal of Human-Computer Studies, Vol. 103, pp. 95-105.
Acknowledgements
The authors thank anonymous respondents who provided data for this study.
Authors' contributions: All authors contributed equally to the development of the article.
Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Disclaimer: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors.
Competing interests: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Consent for publication: All authors consent to the publication of the article with PSU Research Review (PRR), Emerald Publishing Ltd.
Corresponding author
About the authors
Brighton Nyagadza is a full-time lecturer and chairperson in the Department of Marketing (Digital Marketing) at Marondera University of Agricultural Sciences and Technology (MUAST), Zimbabwe, a full member of the Marketers Association of Zimbabwe (MAZ), an associate of The Chartered Institute of Marketing (CIM), United Kingdom, and a power member of the Digital Marketing Institute (DMI), Dublin, Ireland. He has published several book chapters with Routledge Books of Taylor and Francis Publishers, New York (USA), Lexington Books of Rowman & Littlefield Publishers, Maryland (USA) and Langaa Publishers (Cameroon), and articles in reputable global journals such as Journal of Fashion Marketing and Management (Emerald Insight, UK), Journal of Environmental Media (Intellect, Bristol, UK), Journal of Asian and African Studies (SAGE, London, UK), Journal of Digital Media and Policy (Intellect Publishers, Bristol, UK), Youth and Society (SAGE, London, UK), Cogent Business and Management, Cogent Psychology, Cogent Social Sciences (Taylor and Francis, England and Wales, UK), Communicare (University of Johannesburg, South Africa), The Marketing Review (Westburn Publishers, Scotland), Retail and Marketing Review (UNISA), Africanus (UNISA Press), Amity Journal of Entrepreneurship (Amity University Press) and others. Currently, he is editing a book on Social Media Marketing Strategy Post COVID-19 Pandemic: Ethics, Challenges and New Directions, to be published by Vernon Press Publishers, Wilmington, Delaware (USA).
Asphat Muposhi is a lecturer in the Marketing Department of Information and Marketing Sciences, Faculty of Business Sciences, Midlands State University (MSU), Gweru, Zimbabwe. He holds a PhD in Marketing Management from the University of Johannesburg, South Africa. His research interests are in sustainable development and green supply chain management.
Gideon Mazuruse is a full-time Mathematics and Statistics lecturer under the Teaching and Learning Institute of the Marondera University of Agricultural Sciences and Technology (MUAST), Zimbabwe. He holds an MSc in Statistics and Operations Research (NUST), a BSc Hons in Statistics and Operations Research (GZU), a BSc in Mathematics and Computer Science (GZU) and a Postgraduate Diploma in Education (ZOU).
Tendai Makoni is a Mathematics and Statistics Lecturer in the Department of Mathematics and Computer Science, School of Natural Sciences, Great Zimbabwe University (GZU), Masvingo, Zimbabwe. He is a holder of MSc (Operations Research and Statistics) (NUST) and BSc Hons (Mathematics) (MSU).
Tinashe Chuchu is a senior lecturer in the Department of Marketing, Faculty of Business Sciences, University of the Witwatersrand, Johannesburg, South Africa. His research focuses on customer decision-making, consumer behaviour and marketing management. He earned his PhD in Business Sciences (Marketing) from the University of the Witwatersrand, Johannesburg, South Africa.
Eugine T. Maziriri is currently a senior lecturer in the Department of Business Management at the University of the Free State (UFS) in Bloemfontein, South Africa. He teaches marketing courses to undergraduate and postgraduate students. He has a keen interest in consumer behaviour, green marketing and the development of small businesses. In addition, he has published numerous papers in peer-reviewed, international and local accredited journals. He earned his PhD in Business Sciences (Marketing) from the University of the Witwatersrand, Johannesburg, South Africa.
Anyway Chare is currently pursuing a PhD in Information Technology research at the Durban University of Technology (DUT) in South Africa. His qualifications include the Master of Science in Information Systems degree from the National University of Science and Technology (NUST), the Bachelor of Science Information Systems Honours Degree from the Midlands State University (MSU), and the Higher National Diploma, National Diploma and National Certificate in Computer Studies, all attained at Kushinga Phikelela Polytechnic in Marondera. He also holds the Vocational Diploma in Satellite and Internet Delivery of Educational Television and Multimedia from the United States Telecommunications Training Institute (USTTI) in Washington, DC.