The use of AI in government and its risks: lessons from the private sector

Ricardo Santos (School of Economics and Management, University of Porto, Porto, Portugal)
Amélia Brandão (School of Economics and Management and CEFUP, University of Porto, Porto, Portugal)
Bruno Veloso (School of Economics and Management and INESCTEC, University of Porto, Porto, Portugal)
Paolo Popoli (Department of Management and Quantitative Studies, University of Naples Parthenope, Naples, Italy)

Transforming Government: People, Process and Policy

ISSN: 1750-6166

Article publication date: 22 August 2024


Abstract

Purpose

This study aims to understand the perceived emotions of human–artificial intelligence (AI) interactions in the private sector. Moreover, this research discusses the transferability of these lessons to the public sector.

Design/methodology/approach

This research analysed the comments posted between June 2022 and June 2023 in the global open Reddit online community. A data mining approach was adopted, combining a sentiment analysis technique with a qualitative analysis.

Findings

The results show a prevalence of positive emotions. Nevertheless, a notable percentage of negative emotions, such as hate, anger and frustration, was also found to arise from human–AI interactions.

Practical implications

The insights from human–AI interactions in the private sector can be transferred to the governmental sector to leverage organisational performance, governmental decision-making, public service delivery and the creation of economic and social value.

Originality/value

Beyond the positive impacts of AI in government strategies, implementing AI can elicit negative emotions in users and potentially negatively impact the brand of private and government organisations. To the best of the authors’ knowledge, this is the first research bridging the gap by identifying the predominant negative emotions after a human–AI interaction.

Citation

Santos, R., Brandão, A., Veloso, B. and Popoli, P. (2024), "The use of AI in government and its risks: lessons from the private sector", Transforming Government: People, Process and Policy, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/TG-02-2024-0038

Publisher

Emerald Publishing Limited

Copyright © 2024, Ricardo Santos, Amélia Brandão, Bruno Veloso and Paolo Popoli.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Governments’ rapid technological evolution and increasing utilisation of artificial intelligence (AI) are unleashing numerous opportunities globally and revolutionising conventional methodologies. This paradigm shift extends beyond traditional service provision and policy formulation, instigating swift changes in governmental practices and a quest for greater community participation (Awasthi et al., 2012). This development plays a pivotal role in propelling innovation, ensuring sustainability, fostering competitiveness and ultimately enhancing the quality of life globally (Dwivedi et al., 2022). Technological development has transferred human-like capabilities to AI, prompting researchers to question their assumptions about human–AI interaction (Stone et al., 2020). AI has acquired the capacity to understand, react, learn and adopt functions and characteristics that are increasingly close to those of humans (McDuff and Czerwinski, 2018), blurring the boundary between humans and technology. A previous literature review observed that individuals tend to treat AI systems like humans while interacting with them, and some users even report a stronger connection with an AI system than with people (Arnd-Caddigan, 2015). This rapid development raises questions that go beyond its benefits. Research on human–AI interaction has primarily focused on its applicability and advantages, leaving a gap in the literature regarding the types of negative emotions this interaction elicits among consumers (Alsheiabni et al., 2019). Furthermore, AI has the potential to alter fundamental aspects of operation in the private sector, particularly in business management, as well as in the public sector, within governments and other organisations. Accordingly, this study aims to:

  • provide a global understanding of the use of AI in corporate brands, mainly private; and

  • explore the negative emotions and perceptions of human–AI interactions.

The following research questions were addressed:

RQ1.

What are the negative emotions perceived by the user in human–AI interactions?

RQ2.

What lessons can be helpful for the government sector through the lens of the private sector?

To this end, this research conducted an in-depth online community study by analysing openly available Reddit data, which allows us to assess the existence of brand hatred and fill the gap in the literature on this topic across the social sciences (Kempeneer et al., 2023; Khatoon and Rehman, 2021; Nanda and Banerjee, 2020). Belonging to a group contributes to an individual’s greater involvement and active participation (Martínez-Torres, 2014). Establishing brand communities can therefore play an essential role in implementing these new consumer interactions, which is critical for developing new ideas that can lead to change and increase profitability (Hofstetter et al., 2021). This study applied a netnographic methodology using a data mining approach. Sentiment analysis (SA) was applied to the collected data to achieve an overall understanding of the negative feelings and emotions perceived after a consumer experiences a human–AI interaction. These include emotions, beliefs, preferences, perceptions, behaviours and achievements that occur during or after interacting with a particular product or service (Nguyen and Tran, 2023). Inefficient task performance and negative experiences are service failures that affect the user’s perception of AI systems (Anaza et al., 2021; Blöcher and Alt, 2021). Our study contributes to understanding how digital technologies influence government activities, the services offered and, therefore, the effectiveness of government action on the problems of the administered community. This paper analysed the private sector because it can provide valuable lessons for governments in using AI in their activities, especially in relational and communication processes with users. Furthermore, the private sector offers data on consumer communities, from which consumer reactions to AI tools can be analysed. The key contributions of this research are twofold.
First, it is the first research to determine whether the prevailing sentiments regarding the use of AI are primarily positive or negative and their consequent impact on user dissatisfaction. Second, it identifies the predominant negative emotions stemming from this interaction. The subsequent sections of this paper are structured as follows: section 2 provides an overview of AI and its implications for the consumer–brand relationship, focusing mainly on negativity; section 3 outlines the methodologies proposed for identifying the primary consumer feelings resulting from the relationship between humans, AI and brands; section 4 presents the findings from the experiments and tests conducted; section 5 discusses the outcomes of this study; finally, section 6 highlights the conclusions, limitations of the study and directions for future research.

2. Literature review

AI is one of the most innovative technologies associated with human thinking, allowing machines to assume cognitive functions and perform intellectual tasks such as problem-solving, reasoning and autonomous learning (De Bruyn et al., 2020). Technological advancements in AI and machine learning (ML) have reshaped industries. This evolution has led to some industries becoming obsolete as automation takes over tasks efficiently performed by robots. AI technology enables real-time data collection, analysis, personalised recommendations and experiential learning, enhancing human cognitive capabilities significantly (McKinney et al., 2020; Munoko et al., 2020). Using sophisticated algorithms and the ability to learn (ML), AI rapidly transforms the business world and generates tremendous interest among researchers. AI’s ability to trigger emotions makes it essential for analysing how organisations interact with consumers (Kim et al., 2022). This ability has been crucial for its rapid ascension in the service industry and, consequently, to various areas of business, defence, design, health and the finance sector (McKinney et al., 2020; Munoko et al., 2020). In the governmental sector, the substantial potential of AI is increasingly acknowledged and applied to enhance organisational performance, governmental decision-making, public service delivery and the creation of public value by incorporating AI into organisational and governmental strategies (Wang et al., 2021). Intelligent public administration contributes to promoting the effectiveness of public services and influences the development of the three pillars of sustainable development: the economy, society and the environment (McKinney et al., 2020). However, beyond the positive impacts, the implementation of AI carries a potential negative impact on the brand of public organisations and governments, which is increasingly evident. 
Effective government policies make it possible to mitigate these risks and ensure a responsible and beneficial implementation of AI (Cihon et al., 2020). Analysing how the private sector implements AI allows for formulating regulatory frameworks and fundamental ethical guidelines to guide the responsible deployment of AI technologies. Regulation in the public sector mitigates risks and standardises practices across all industries (Nanda and Banerjee, 2020).

2.1 Artificial intelligence

2.1.1 Artificial intelligence and marketing.

AI in marketing offers the significant advantage of automating the analysis of large volumes of data, enhancing the marketing mix (Tu et al., 2005). In digital marketing, AI promotes products more efficiently and customises offers based on user data, thereby increasing consumer satisfaction and brand loyalty (De Bruyn et al., 2020). Companies foresee AI’s cognitive capabilities surpassing those of humans, prompting them to replace human resources with AI due to its numerous benefits, including process efficiency, reduced acquisition costs, marketing expenses, customer profiling, campaign customisation and increased margins (Huang and Rust, 2021; Nguyen and Tran, 2023; Sternberg, 2003). Research highlights the necessity of integrating technological and human elements and balancing big data with small data (Nguyen and Tran, 2023). Text mining objectively evaluates vast amounts of data, particularly from customer reviews on websites like Yelp or TripAdvisor, which are viewed as credible sources that aid consumers in making informed decisions (Filieri et al., 2015). Text mining is used in various fields, including product planning, marketing and digital marketing, by analysing social media data to derive marketing insights (Alalwan et al., 2017; Jeong et al., 2019). AI, text mining and fuzzy linguistic models together enhance decision-making by automating data analysis, extracting insights from text and effectively managing uncertainty, thus empowering organisations with robust decision support and improving operational efficiency and strategic outcomes (Ciasullo et al., 2018). Supervised learning methods, such as support vector machines, predict outcomes based on identified predictors, whereas unsupervised methods, like non-negative matrix factorisation, uncover underlying data patterns without specific target variables. 
Despite abundant textual information from sources like emails and blogs, efficiently extracting relevant content remains a challenge (Verma et al., 2021). SA, which uses textual data from social media, review websites, blogs, forums and interview transcripts, has been applied in sectors such as health care during COVID-19 (Waldherr et al., 2017), politics (OECD, 2022), education (Yau et al., 2021) and product evaluation (AL-Sharuee et al., 2021). It assesses feelings and opinions through expressive styles, with recent algorithms improving SA in product reviews. However, challenges such as spam, false data, domain specificity, negation, natural language processing (NLP) complexity, bipolar terms and extensive vocabulary persist (Collobert et al., 2011).

NLP enables computers to analyse, understand and handle human language, thereby executing tasks (Agrali and Aydin, 2021). SA models, built upon lexicon libraries, determine the emotions conveyed in a text, classified as positive, negative or neutral. These models are crucial in brand communication management, as user-generated and company-generated content sentiments significantly impact customer-brand relationships (Sun et al., 2023).
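To make the lexicon-based approach concrete, the sketch below implements a deliberately tiny polarity classifier of the kind described above; the miniature lexicon, its word scores and the ±0.05 cut-off are illustrative assumptions, not the lexicon or thresholds used in this study.

```python
# Minimal sketch of a lexicon-based sentiment classifier, illustrating the
# principle behind SA tools built on lexicon libraries. The toy lexicon and
# the ±0.05 cut-off below are illustrative assumptions only.

TOY_LEXICON = {
    "love": 3.2, "great": 3.1, "helpful": 1.9,
    "hate": -2.7, "angry": -2.3, "frustrating": -2.1, "broken": -1.6,
}

def polarity(text: str) -> float:
    """Average lexicon score of the words found in the text (0.0 if none)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [TOY_LEXICON[w] for w in words if w in TOY_LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def label(text: str, threshold: float = 0.05) -> str:
    """Map the averaged score to positive / negative / neutral."""
    score = polarity(text)
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"
```

Production systems such as VADER extend this idea with intensity modifiers, negation handling and punctuation cues, but the classification step is the same mapping from a lexicon-derived score to a polarity class.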

2.1.2 Applications of artificial intelligence in government.

Assessing the influence of AI integration on the interactions between the public and private sectors with individuals is vital, considering both positive and negative impacts. For example, adopting data-driven processes has significantly improved transparency and operational efficiency, addressing information asymmetry in public–private partnerships (Aben et al., 2021).

However, the development and proliferation of AI pose challenges and questions related to the labour market, sustainable development, user privacy and security, transparency, conflicts of interest and ethical/social dilemmas (Buehler et al., 2021; Desouza et al., 2020; Malik et al., 2022; Nida-Rümelin and Weidenfeld, 2022). These challenges are more pronounced in the public sector, where decision-makers face constraints from laws, rules and practices (Desouza et al., 2020), alongside issues such as a lack of qualified workforce, limited investment and unclear regulations essential for ensuring transparent, secure, ethical and people-centric applicability (Buehler et al., 2021). Despite these challenges, governments can leverage AI potential through tailored approaches aligned with local policies and economic structures.

Differences between the public and private sectors diminish in key implementation areas:

  • back-office automation for content synthesis using robotics, NLP and computer vision, exemplified by Singapore’s GovTech application for text summarisation and report generation;

  • software development, such as the HM Treasury’s use of GitHub copilot to accelerate development;

  • optimisation of public services and user interaction through recommendations, chatbots and personal assistants, as seen in Germany’s “Lummi” for accessing government services and “Ask Jamie” used in Singapore;

  • content creation, including emails, social media posts, contracts and proposals, with examples like the US Department of Defense using the AI software “ACQBOT” to accelerate procedural contract writing; and

  • in health care, using data analysis for disease prevention, diagnosis and treatment is exemplified by the UK’s National Health Service establishing a National COVID-19 chest imaging database to develop AI technologies for treating COVID-19 subsequently (Berglind et al., 2022).

The interdependencies between the public and private sectors in the context of AI are projected to grow, with collaborative initiatives playing a critical role in addressing societal challenges. As AI technologies evolve, their deployment across both sectors will likely catalyse further innovations. This concurrent advancement is projected to intensify, propelled by collective commitments to sustainable development and technological inclusivity (Zhang et al., 2020).

2.2 Human–artificial intelligence interaction

AI and marketing connect humans to machines in several ways. The first is to replace humans with AI entirely, as with real-time recommendations and advertising. A second method involves collecting and analysing data, which is subsequently used to assist in decision-making, employee hiring, enhancing consumer relationships and improving the pre-delivery, service journey/delivery and post-delivery phases (Tõnurist and Hanson, 2020). The third method also relies on data collection, but unlike the second, the AI system makes the decisions, not the user. Deciding which operating mode to use depends on the type of business (Yadav and Chakrabarti, 2022).

As AI technology advances, human interactions continue to evolve. This becomes particularly important given users’ limited confidence in, and reservations about, the use of AI in certain circumstances (Alsheiabni et al., 2019). While automation is widely perceived as advantageous, the AI literature highlights numerous challenges and deficiencies. Despite considerable optimism about future advancements and uses of AI, a cautious sentiment exists concerning the speed and breadth of its potential influence (Moore and Chuang, 2017; Silva, 2019).

The increasing complexity of these systems is one such problem, giving rise to the so-called “black box”. This term generally describes a lack of transparency: the technology has become so complex and challenging to analyse that users cannot understand the processes behind the tasks performed by AI. This generates increased distrust, perceived complexity and resistance to the use of such systems (Dwivedi et al., 2022; Nagar and Malone, 2011; Simon, 2019). “Technostress” is a term coined by Craig Brod in 1984 (Hackley, 2023) to describe the physical and psychological symptoms caused by excessive use of technology (Troisi et al., 2023), the pressure one feels to keep up with the latest technology, or even the feeling of being overwhelmed by the amount of technology available (Tanna et al., 2020).

2.3 Negativity in consumer–brand relationship

A strong brand depends on consumers’ perception of its attributes and qualities and on their feelings for the brand. Web 3.0 is built on three key elements: semantic analysis, AI and social interactions (Cheng, 2024). These elements make it possible to analyse the data consumers provide while interacting with websites and translate those data into useful information for brands and management. Understanding emotions helps redefine AI applications and aligns decision-making with human values and social norms. Brand hatred is a mix of feelings that includes anger, fear, aversion and disgust (Sparks and Browning, 2011). It is one of the most studied negative emotions for three main reasons: the capacity of “brand haters” to damage brands; the fact that brand hate is the most intense feeling a consumer can have towards a brand; and its origin in poor performance or customer dissatisfaction with a product or service (Hegner et al., 2017; Khatoon and Rehman, 2021). This hate can lead to consumer activism against brands, potentially causing a drastic decline in brand loyalty and preference (Aziz and Rahman, 2022). Consumption is not just a response to an external phenomenon; it is also influenced by emotions and feelings (Hackley, 2023). Emotions act as a catalyst for all humans (Fonberg, 1986), and emotion analysis has been widely used to improve relationships between consumers and organisations.

The analysis of negative emotions (NE) experienced by consumers is a significant concern for brands and organisations, because it is consumers’ interactions with products and services that allow a brand to identify behavioural trends and determine the approach it should adopt to reduce the opposition effect, which is stronger than the support effect (Banister and Hogg, 2004).

NE arise when expectations fall short and objectives are not achieved, evoking anger, distrust, dissatisfaction and frustration (Christodoulides et al., 2021). NE vary in intensity, ranging from “brand dislike” (apathy towards a brand) to “brand hate”, the most negative consumer position towards a brand (Yu et al., 2020).

2.3.1 Anti-brand communities and electronic word of mouth.

Negative emotions can be expressed individually or in communities. Consumers who act alone demonstrate their displeasure through complaints, brand abandonment and, in some cases, even looking for ways to get revenge on the brand (Yu et al., 2020; Zarantonello et al., 2016). Consumers who express their negative emotions in brand communities achieve greater expression through word of mouth (WOM), especially on social networks and online forums (Chu and Kim, 2011).

Anti-brand communities consist of individuals who harbour negative sentiments towards a brand, actively sharing and criticising its actions. Within these groups, the brand becomes a focal point that shapes interactions and fosters a sense of belonging among like-minded individuals, influencing their relationships with brands (Patton et al., 2014).

The negative feelings of online anti-brand communities are usually associated with cultural, technological, political and legal issues (Awasthi et al., 2012). These feelings are commonly triggered by the poor quality of products/services and by working conditions. They are also connected to a desire to promote and grow the community, a lack of emotional compatibility and dissatisfaction with business practices, ideologies and economic systems (Krishnamurthy and Kucuk, 2009).

Negative WOM, which results in negative attitudes (Bambauer-Sachse and Mangold, 2011), involves sharing “reviews” that warn or inform about service failures. These reviews do not, however, imply that brand hatred lies behind the relatively damaging comments, as evidenced by the research of Frank et al. (2023) and Romani et al. (2015). Those researchers considered not only “brand hate” but also the consumer: such negative comments may be made for social reasons, mainly if a “reviewer” is engaged in self-promotion. E-WOM is a concern for organisations, as negative comments tend to have more impact than positive ones and can also influence decision-making processes (Skraaning and Jamieson, 2021).

3. Methodology

This section covers the study’s topic, research objective and questions. It introduces the conceptual model as the theoretical framework for data analysis and hypothesis formation. Due to the unavailability of prestructured data, the methodology relies on web mining, scraping tools and SA techniques to ensure meaningful and reliable results. The choice of Reddit for data collection is justified, and suggestions for future research are provided. A rigorous methodology aims to yield valid findings that are beneficial to academia and business. Finally, results are presented and discussed, and conclusions are drawn.

3.1 Netnography approach and web scraping

The introduction of new augmented reality, virtual reality and AI technologies affects consumer experiences. It can have both positive (efficiency, satisfaction, loyalty) and negative (isolation, frustration and others) impacts on consumers and society (Dwivedi et al., 2022; Gaudioso et al., 2017; Nagar and Malone, 2011; Simon, 2019). We decided to carry out an exploratory study using netnography based on the following considerations:

  • the purpose of this investigation;

  • the fact that virtual assistants are replacing traditional face-to-face interactions;

  • the adverse effects of technology on customer interaction that are still poorly investigated and understood; and

  • the lessons from the private sector that can be useful for public government sectors.

Adapted from ethnography, netnography is a qualitative research method for gaining insights into the attitudes and behaviours of consumers in online communities (Kozinets, 2002). Considering the changes that internet development has prompted in the search for new research methods to determine the who, what, when, where and how of different online environments and virtual communities (Gebera, 2008), adopting netnography allows us to obtain authentic, uncontaminated data on the adverse feelings about AI implementation directly from consumer opinions. Current research also suggests that netnography effectively enriches theoretical knowledge about the factors influencing consumer resistance in specific contexts (Huang and Rust, 2021). We made non-participatory observations in Reddit communities and then extracted the data for this study, because these communities are open data sources with vast information on various topics, all discussed publicly. Our data collection spans June 2022 to June 2023. We chose communities that met the requirements suggested by Kozinets (2002): relevance, interactivity, content, heterogeneous participants, rich and detailed data and high traffic of publications and interactions. Consumers increasingly share their emotions online via social media, generating a large amount of data that can be used to analyse the feelings consumers express in their posts. Feelings of anger, sadness, happiness and excitement can be extracted from the comments and analysed to assist decision-making. Our applied method follows the sequence used by Agrali and Aydin (2021) (Figure 1):

The challenges associated with data extraction require more advanced techniques to organise, search, index and review extensive data collections in a time-efficient manner (Alghamdi and Alfalqi, 2015). Data scraping on Reddit allowed us to analyse users’ feelings and opinions towards brands. SA is a method that automatically extracts opinions from reviews by combining language processing with ML algorithms to assign sentiments to phrases or expressions. This gives us an idea of a customer’s opinion, reputation, concerns, experience, perception and the general public satisfaction index (Dwivedi et al., 2022). Once the “reviews” have been collected, SA can yield information that can be used to improve marketing strategies (Giatsoglou et al., 2017). Several methods for performing sentiment analysis exist, such as BERT, IBM Watson, TextBlob and VADER (Agrali and Aydin, 2021). These models rely on lexicon libraries that label feelings and classify them into binary classes (positive or negative) (Sun et al., 2023).

3.1.1 Gather data.

We chose to obtain the data for this study from Reddit due to its unique set of characteristics that can be cross-matched with social networks. Reddit allows its users to publish content pseudonymously, enabling them to communicate anonymously about sensitive topics without fearing social repercussions (Milano et al., 2014). Due to their openness, online forums like Reddit allow users to share honest opinions and emotions about controversial issues and give them the freedom to post provocative topics they might not otherwise feel comfortable discussing in real life (Wirtz et al., 2019).

We categorised data into positive, negative or neutral through pre-processing and sentiment analysis to identify negative sentiments resulting from human interaction with AI and their association with brands. We analysed posts and comments from Anticonsumption (617,534 users), Artificial Intelligence (202,161 users), Branding (14,958 users) and Metaverso (40,542 users), totalling 875,195 users relevant to our research, basing the search on a set of words (Table 1).

Data were collected via web scraping using the Python Reddit API Wrapper (PRAW), stored in CSV format and subjected to text pre-processing and sentiment analysis using the Valence Aware Dictionary and Sentiment Reasoner (VADER) and Sentiment Intensity Analyzer (SIA) in the NLTK library. SIA determines text polarity and intensity with scores ranging from −1 (unfavourable) to 1 (positive). The Python code was compiled in Google Colab Notebook.
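As an illustrative sketch of this pipeline, the code below shows how comment text scraped via PRAW could be labelled from a VADER-style compound score and aggregated into sentiment shares. The subreddit name, credential placeholders and the ±0.05 class thresholds (VADER's usual convention) are assumptions, not the study's exact configuration.

```python
# Sketch of the collection-and-labelling pipeline: scrape comments with
# PRAW, score each one (VADER's compound score falls in [-1, 1]) and bucket
# the scores into sentiment classes. Credentials and limits are placeholders.

def bucket(compound: float) -> str:
    """Map a VADER-style compound score to a sentiment class
    (the +/-0.05 thresholds follow VADER's usual convention)."""
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

def sentiment_shares(compounds):
    """Percentage of positive / negative / neutral comments."""
    labels = [bucket(c) for c in compounds]
    total = len(labels)
    return {k: 100 * labels.count(k) / total
            for k in ("positive", "negative", "neutral")}

def fetch_comments(subreddit_name: str, limit: int = 100):
    """Illustrative PRAW scrape (requires Reddit API credentials)."""
    import praw  # third-party: pip install praw
    reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                         user_agent="sentiment-study-sketch")
    for submission in reddit.subreddit(subreddit_name).hot(limit=limit):
        submission.comments.replace_more(limit=0)  # flatten comment trees
        for comment in submission.comments.list():
            yield comment.body
```

In use, the compound scores produced by NLTK's SentimentIntensityAnalyzer over the fetched comment bodies would be passed to `sentiment_shares` to obtain per-community percentages of the kind reported in the Results section.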

Pre-processing addressed informal language elements like emojis and URLs prevalent in Reddit data, enhancing polarity measurement accuracy. VADER and SIA were chosen for their simplicity, speed, ability to gauge sentiment intensity and seamless integration into Python.
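A minimal sketch of this kind of pre-processing is shown below, assuming simple regular-expression rules for URLs, Reddit handles and common emoji ranges; the study does not specify its exact cleaning rules, so these patterns are assumptions.

```python
# Illustrative pre-processing for Reddit comments: strip URLs, u/... and
# r/... handles and emoji-range symbols, then normalise case and whitespace.
# These specific rules are assumptions, not the study's exact pipeline.

import re

URL_RE    = re.compile(r"https?://\S+|www\.\S+")
HANDLE_RE = re.compile(r"/?[ur]/\w+")            # Reddit u/... and r/... handles
EMOJI_RE  = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def preprocess(comment: str) -> str:
    """Normalise a raw Reddit comment before polarity measurement."""
    text = URL_RE.sub(" ", comment)
    text = HANDLE_RE.sub(" ", text)
    text = EMOJI_RE.sub(" ", text)
    return " ".join(text.lower().split())        # collapse whitespace
```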

Overall, our methodology leveraged Reddit’s diverse and current data to effectively examine consumer sentiments towards AI and brands.

4. Results

Sentiment analysis algorithms commonly categorise data into distinct classes or groups. Many such algorithms can be trained via machine learning techniques to enhance the precision of their outcomes (Romani et al., 2015). The SIA, a VADER library component, is explicitly optimised for analysing sentiments articulated in contemporary English, rendering it particularly effective for this study’s data type. SIA uses a lexicon comprising lexical features, namely, words, annotated based on their positive or negative semantic orientation. Each word within this lexicon is assigned an intensity score that indicates the degree of positivity or negativity.

The findings from this research indicate a predominance of positive sentiments concerning AI and consumer interactions, as shown in Figure 2; however, a notable proportion, ranging from 23% to 35% across communities, exhibits negative sentiments.

After performing the sentiment analysis, the data were organised in a word cloud to observe which words were used the most to express each of the feelings, negative or positive.

The word cloud enabled a more in-depth study of the negative feelings that appeared most frequently. Our observations showed that the most prevalent negative feeling was “hate”. To discover the predominant negative emotions, we analysed the frequency, within the negative word cloud, of the key negative sentiments described in the literature (Casado Diaz and Más Ruíz, 2002; Khatoon and Rehman, 2021). Following this initial analysis, we broadened our investigation to include additional words associated with emotions. We also found a word that represents more than a feeling; it represents a behaviour. That word was “kill”.
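The frequency analysis behind such a word cloud can be sketched as follows; the stop-word list and sample comments are illustrative assumptions, not the study's data.

```python
# Sketch of the frequency analysis behind the word cloud: count terms in
# negatively labelled comments after dropping stop words. The stop-word
# list and sample comments below are illustrative assumptions.

from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "it", "i", "this", "to", "and", "of"}

def term_frequencies(comments):
    """Count non-stop-word terms across a list of (cleaned) comments."""
    counts = Counter()
    for comment in comments:
        for word in comment.lower().split():
            word = word.strip(".,!?\"'")
            if word and word not in STOP_WORDS:
                counts[word] += 1
    return counts

# Hypothetical negatively labelled comments for illustration only.
negative_comments = [
    "I hate this assistant",
    "hate the chatbot, it made me angry",
    "so frustrating and broken",
]
top = term_frequencies(negative_comments).most_common(3)
```

A word-cloud library would then render these counts visually, with the most frequent terms (here, "hate") drawn largest.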

5. Discussion

In recent years, there has been a significant increase in the adoption of AI technologies by various brands, which often interact directly with the consumer.

However, it is crucial to recognise the challenges and limitations of implementing these systems and how to minimise negative feelings, which can be detrimental to brands. The growth of online communities like Reddit means information spreads faster, which can threaten the relationship between brands and consumers. Social networks are an essential data source that provides insights into consumer perceptions; however, using a single platform provides only a partial view and can make the sample less diverse (Bode and Vraga, 2018). This study differs from prior ones in that it explores online communities and treats data exported directly from consumer comments using ML and NLP mechanisms. Our choice of Reddit makes the investigation even more innovative because it is a digital environment in which users can interact pseudonymously, expressing themselves anonymously and more openly and sincerely than in other contexts, providing richer data (Wirtz et al., 2019) conducive to studying social and behavioural phenomena. After applying the code created explicitly for this study, we found the relationship between negative, positive and neutral feelings to be relatively similar across communities (Figure 3). This strengthens the consistency of the model and of opinions across communities.

While AI-driven automation prevents many human errors, higher levels of automation lead to poorer user responses when critical incidents occur (Moore and Chuang, 2017). This paradox shows that technology’s increased functional benefits can simultaneously increase consumers’ perceived risk and fear following negative experiences (Johnson et al., 2008). Feelings identified in our study include “Threat, Fear, Frustration, Anxiety, and Worried” (Figures 4 and 5).

This behaviour suggests that consumers may experience negative emotions towards brands and express a collective desire to “kill” the brand in AI, which goes beyond an individual feeling. It implies intense hostility and can have significant implications for brands because it involves rejection and the customers’ desire to eliminate the presence and influence of the brand in their lives. Indeed, this feeling had never previously been found in a consumer–brand relationship. It can also mean that users perceive that, by investing in AI, the brand is destroying itself, i.e. pursuing a long-term self-destructive strategy. Hence, whereas the literature review treats hate as an emotion, “kill” represents an action and a behaviour; it can represent the escalation of hate into a more severe behavioural consequence. This indicates that brands need to be aware of the possibility of evoking such extreme feelings in consumers and take measures to avoid or reverse this hostility. Brand hatred is a very present feeling in consumer/user dissatisfaction, and the change in consumer behaviour is evident in all areas. This hostility reinforces these changes and underlines the growing importance of studies on brand hatred, as brands are increasingly susceptible to dissatisfaction, especially when they fail to meet community expectations or do not consider what consumers and society value (Scuotto et al., 2023).
Our data collection from Reddit showed that when organisations make decisions that consumers find difficult to understand, they tend to experience hate and are more likely to describe a negative experience or review (Christodoulides et al., 2021). The percentage and frequency of negative feelings also show the need to understand the negative interactions between brands, consumers and AI.

Unlike previous studies, this study’s results confirmed that it is essential to monitor consumers’ feelings and their relationship with brands and organisations, in both the corporate and academic spheres (Wu et al., 2021). Brand hatred comprises emotions such as anger, contempt, disgust, sadness and fear (Fetscherin, 2019). Our study confirmed this with the emergence of emotions such as “Disgust, Anger, Frustration, Guilt, Regret, Shame, Hatred, Scam, Anxiety, Warned, Fear, and Worried”. Reviewing the list of negative feelings and words allowed us to verify the implicitly behavioural nature of this study, as shown in Table 1: words like “kill” and “warned” represent more than a feeling; they represent behaviours and actions that harm the organisation.

Regarding the second research question of this study, “What lessons can be useful for the government sector through the lens of the private sector?”, the existence of a non-marginal percentage of consumers who express negative feelings towards the use of AI represents a “red flag” that should not be underestimated. The first consideration is that the discussion on the use of AI in public administration is part of the eternal dualism between the efficiency of the organisational machine (especially reduced processing times and better use of resources), on the one hand, and the guarantee of fairness and transparency of administrative action, accessibility, and data protection and security, on the other. The first challenge of AI in government lies in this dualism: there is no doubt that AI can improve the management of work processes (Tarafdar et al., 2007), but it is precisely on issues related to the system of public guarantees that the concerns and scepticism of a non-marginal number of private individuals rest. According to research carried out in January 2024 by FPA, an Italian services and consultancy company for the innovation of public administrations, a high degree of scepticism characterises citizens’ perception of AI: 46% are critical of its actual impact on public administration (26% moderately doubtful and 20% openly critical, believing that the public administration is not adequately prepared for the challenges resulting from AI adoption), while only 24% recognise a strong potential for AI in strengthening public administration. Finally, a significant 30%, almost a third of citizens, did not respond, highlighting uncertainty or lack of knowledge about the topic.
However, the risk that must be considered in using AI in public administrations concerns the possible negative repercussions on the relationship between citizens and public administrations. The emotional involvement that consumers reveal deserves maximum attention because it concerns one of the elements characterising the paradigms of “new public governance” (Lynn, 2010) and “open government” (Gao et al., 2023; Kempeneer et al., 2023): the objective of opening public administration to citizen participation and of making citizens, where possible, an active part of public action. Both conceptual paradigms consider the citizen not a “client” but a “co-creator” of choices negotiated or shared with the public administration (Cortés-Cediel et al., 2023) and tend to involve the citizen in the decision-making process, also to increase the legitimacy of public action in contemporary democracies (Krogh and Triantafillou, 2024; Nicolescu and Tudorache, 2022). From this objective arises the need for public action to be characterised by maximum transparency; accordingly, the risk posed by opaque criteria underlying AI decisions is certainly greater for public action than for the action of private brands. In this regard, our research has highlighted that when brands make decisions that consumers find difficult to understand, consumers tend to feel hate and are more likely to spread and share this negative feeling. In the case of public administration, this non-transparency of decisions is a hazardous element that can generate negative feelings in the community, leading to the most extreme expressions of hate and contempt.
The results of our research are instructive in indicating that the greatest challenge of using AI in public administration lies in its ability to increase the degree of “citizen satisfaction” (Zarantonello et al., 2016), giving the relationship between citizens and public administration a more interactive character, together with greater flexibility of administrative action with respect to the specificities of each case to be managed. In public administration, even more than in the private sector, the concept of “digital humanism” is central (Fuchs, 2022; Saura et al., 2023; Schmoelz, 2023): the person, not the machine, is at the centre of attention. This centrality also implies that public administration should be flexible in providing responses calibrated to citizens’ differentiated needs. The use of AI could generate negative feelings in citizens, such as “threat, fear, frustration, anxiety, worry”, linked to the risk that AI is more rigid than human action and fails to find the compromise solutions often needed to adapt general rules and criteria to the different and specific cases observable in reality. It should also be added that not all citizens have the same ability to interact with AI tools, and this difficulty can represent a further driver of negative feelings. A further lesson from our results concerns the risk that negative feelings can lead to the desire to “kill” the individual/brand/organisation that uses AI tools. In the case of public administration, this risk means that a critical fracture can open between the citizen and the public administration, which can lead to actions and behaviours involving a certain degree of violence.
In the event of a breakdown in the relationship, while a consumer–brand relationship can end, a relationship between citizen and public administration cannot end, and this, therefore, represents a significant difference in the consequences of hatred.

6. Conclusions

Our results identified negative feelings like hate, anger, fear, worry, threat, warning, danger and regret, alongside words expressing negative judgements, such as “bad”, “mistake” and “wrong”. Overall, most of the feelings we identified were positive, but about 30% were negative. After analysing the types of negative feelings and words in the comments and publications, we found that most were related to distrust and uncertainty caused by consumers’ difficulties in keeping up with the rapid evolution of AI and the unpredictability of its future development (Friedman et al., 2000).

Carrying out an empirical study on the subject provided the opportunity to test feelings and relationships, helping to validate hypotheses and support the conclusions drawn in this context. Furthermore, this study extends this line of research to the context of online communities, investigating eWOM and its role in this relationship.

Negative WOM arises from negative experiences (Bambauer-Sachse and Mangold, 2011). It proliferates through social networks and significantly influences evaluations, purchase intentions and consumer relationships with brands, causing significant changes in consumers’ perceptions and attitudes (Patton et al., 2014; Zarantonello et al., 2016), particularly when consumers experience such extreme feelings.

The data from this study also opened up new research horizons by identifying a more extreme feeling/behaviour than brand hatred: “kill”. This feeling, rarely explored in the literature, may be related to the type of users on a platform, anonymity, or even a growing escalation of violence on social networks (Park et al., 2023). The adverse effects of this relationship and the introduction of an even more extreme feeling than hatred are tangible contributions that this study has put forward. Our study adds a vital conceptual contribution to AI literature, brand management and governance, confirming the presence of strong negative feelings due to the interaction between brand consumers and AI (Romani et al., 2015).

Based on our findings, we suggest that organisations introducing AI systems do so in stages and use sentiment analysis to assess consumer reactions across social networks and online communities. In this way, they can evaluate their strategies and avoid the proliferation and escalation of negative feelings. During each implementation stage and after deployment, organisations should monitor their consumers and explain the advantages of these systems, the reasons for adopting them and how they work. AI systems must also be designed with the target audience in mind and be transparent, informative and safe, gaining consumers’/users’ trust while they interact with the brand (Simon, 2019). Constantly monitoring user feelings will also help avoid negative experiences and reduce brand hatred, ensuring that decisions are made considering users’ expectations.
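The staged-rollout recommendation above could be operationalised as a simple monitoring rule. The threshold and the per-stage figures below are hypothetical illustrations, not values from the study.

```python
# Hypothetical escalation check for a staged AI rollout: flag any stage where
# the share of negative sentiment exceeds a chosen threshold.
NEGATIVE_SHARE_THRESHOLD = 0.30  # illustrative; roughly the level observed in this study

def stages_to_review(stage_shares, threshold=NEGATIVE_SHARE_THRESHOLD):
    """Return rollout stages whose negative-sentiment share breaches the threshold."""
    return [stage for stage, share in stage_shares.items() if share > threshold]

# Hypothetical per-stage negative shares gathered via social-media monitoring.
observed = {"pilot": 0.12, "regional": 0.28, "full": 0.41}
print(stages_to_review(observed))  # → ['full']
```

A flagged stage would then trigger the communication and transparency measures discussed above before the rollout proceeds.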

As in private organisations, public organisations and governments face various AI-related risks. The degradation of public brands, whether international organisations, national entities or other similar public bodies, can threaten society’s foundations (Zhao and Gómez Fariñas, 2023). Therefore, analysing the negative impact of users’ interaction with AI on governance is also crucial. The use of AI in government presents more significant risks than in the private sector because it is part of the eternal dualism between the efficiency of the organisational machine, on the one hand, and the system of guarantees that characterises public action (fairness, non-discrimination, transparency, inclusiveness, data protection and security, etc.), on the other. Furthermore, AI in public administration must be seen within the principles and criteria of “new public governance” and “open government”, which tend to open public administration to citizen participation in decision-making and service delivery. Human–AI interaction therefore represents a challenge that will characterise public administration in the future. Consequently, it is a cause for concern to have found that, in the private sector, around 30% of individuals express negative feelings towards AI. The empirical evidence this study has revealed in the private sector constitutes essential “lessons” for government in using AI because it represents a “red flag” and confirms the risks that AI presents in citizen–AI interaction.

The main limitation of this study is its sample size: while numerous comments and publications from platform users were analysed, few discussions focused on brands (Birjali et al., 2021). Another limitation pertains to the type of platform users. The data was sourced from active users of AI-focused communities, who may have greater knowledge and expertise, potentially reducing mistakes and negative sentiments. Future research could establish new constructs, variables or indicators to examine brand-related risks in specific scenarios or sectors. Although the required methodology would be innovative, it would be based on data extraction from user-generated content (UGC). Comparing this data with other sources, such as surveys or interviews, could provide insights from a less technologically informed audience.

Figures

Figure 1. Data collection and analysis steps

Figure 2. Sentiment analysis results

Figure 3. Word cloud with negative and positive sentiments

Figure 4. Number of times each word appears in the comments

Figure 5. General overview of word frequency and negative sentiments in the comments

Table 1. Words used in researching comments and posts within communities

Subreddit                  Query
Metaverse                  [“brand”, “branding”, “branded”, “brands”]
Anticonsumption            [“Artificial intelligence”, “chatbot”, “virtual reality”, “Augmented reality”, “brand”, “branding”, “branded”, “brands”, “AI”]
Artificial intelligence    [“brand”, “branding”, “branded”, “brands”]
Branding                   [“AI”, “artificial intelligence”, “chatbot”, “virtual reality”, “Augmented reality”, “brand”]

Data: searched through the last 1,000 publications.

Source: Created by authors
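The query matching summarised in Table 1 can be sketched as a simple filter over collected posts. In the study the posts came from the listed subreddits; here the post texts and helper names are illustrative assumptions.

```python
# Keep only posts that mention at least one query term (case-insensitive),
# mirroring the per-subreddit queries in Table 1 (subset shown).
QUERIES = {
    "Metaverse": ["brand", "branding", "branded", "brands"],
    "ArtificialIntelligence": ["brand", "branding", "branded", "brands"],
}

def matches_query(text, terms):
    """True if any query term appears in the post text (case-insensitive)."""
    lowered = text.lower()
    return any(term.lower() in lowered for term in terms)

# Hypothetical post texts for illustration.
posts = ["This brand's chatbot is awful", "New VR headset announced"]
selected = [p for p in posts if matches_query(p, QUERIES["Metaverse"])]
print(selected)  # → ["This brand's chatbot is awful"]
```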

References

Aben, T.A., van der Valk, W., Roehrich, J.K. and Selviaridis, K. (2021), “Managing information asymmetry in public-private relationships undergoing a digital transformation: the role of contractual and relational governance”, International Journal of Operations and Production Management, Vol. 41 No. 7, pp. 1145-1191.

Agrali, O. and Aydin, O. (2021), “Tweet classification and sentiment analysis on metaverse related messages”, Journal of Metaverse, Vol. 1 No. 1, pp. 25-30.

Alalwan, A.A., Rana, N.P., Dwivedi, Y.K. and Algharabat, R. (2017), “Social media in marketing: a review and analysis of the existing literature”, Telematics and Informatics, Vol. 34 No. 7, pp. 1177-1190.

Alghamdi, R. and Alfalqi, K. (2015), “A survey of topic modelling in text mining”, International Journal of Advanced Computer Science and Applications (IJACSA), Vol. 6 No. 1.

Al-Sharuee, M.T., Liu, F. and Pratama, M. (2021), “Sentiment analysis: dynamic and temporal clustering of product reviews”, Applied Intelligence, Vol. 51 No. 1, pp. 51-70.

Alsheiabni, S., Cheung, Y. and Messom, C. (2019), “Factors inhibiting the adoption of artificial intelligence at organizational-level: a preliminary investigation”, Americas Conference on Information Systems, Association for Information Systems.

Anaza, N.A., Saavedra, J.L., Hair, J.F. Jr., Bagherzadeh, R., Rawal, M. and Osakwe, C.N. (2021), “Customer-brand disidentification: conceptualization, scale development and validation”, Journal of Business Research, Vol. 133, pp. 116-131.

Arnd-Caddigan, M. (2015), Sherry Turkle: Alone Together: Why We Expect More from Technology and Less from Each Other, Basic Books, New York, NY.

Awasthi, B., Sharma, R. and Gulati, U. (2012), “Anti-branding: analyzing its long-term impact”, Journal of Brand Management, Vol. 9 No. 4, pp. 48-65.

Aziz, R. and Rahman, Z. (2022), “Brand hate: a literature review and future research agenda”, European Journal of Marketing, Vol. 56 No. 7, pp. 2014-2051.

Bambauer-Sachse, S. and Mangold, S. (2011), “Brand equity dilution through negative online word-of-mouth communication”, Journal of Retailing and Consumer Services, Vol. 18 No. 1, pp. 38-45.

Banister, E.N. and Hogg, M.K. (2004), “Negative symbolic consumption and consumers’ drive for self-esteem: the case of the fashion industry”, European Journal of Marketing, Vol. 38 No. 7, pp. 850-868.

Berglind, N., Fadia, A. and Isherwood, T. (2022), “The potential value of AI—and how governments could look to capture it”, McKinsey, available at: www.mckinsey.com/industries/public-and-social-sector/our-insights/the-potential-value-of-ai-and-how-governments-could-look-to-capture-it

Birjali, M., Kasri, M. and Beni-Hssane, A. (2021), “A comprehensive survey on sentiment analysis: approaches, challenges and trends”, Knowledge-Based Systems, Vol. 226, p. 107134.

Blöcher, K. and Alt, R. (2021), “AI and robotics in the european restaurant sector: assessing potentials for process innovation in a high-contact service industry”, Electronic Markets, Vol. 31 No. 3, pp. 529-551.

Bode, L. and Vraga, E.K. (2018), “Studying politics across media”, Political Communication, Vol. 35 No. 1, pp. 1-7.

Buehler, K., Dooley, R., Grennan, L. and Singla, A. (2021), “Getting to know-and manage-your biggest AI risks”, available at: www.mckinsey.com/capabilities/quantumblack/our-insights/getting-to-know-and-manage-your-biggest-ai-risks

Casado Diaz, A.B. and Más Ruíz, F.J. (2002), “The consumer’s reaction to delays in service”, International Journal of Service Industry Management, Vol. 13 No. 2, pp. 118-140.

Cheng, S. (2024), Future of Web 3.0, Springer Nature Singapore, Singapore, pp. 195-217.

Christodoulides, G., Gerrath, M.H. and Siamagka, N.T. (2021), “Don’t be rude! The effect of content moderation on consumer-brand forgiveness”, Psychology and Marketing, Vol. 38 No. 10, pp. 1686-1699.

Chu, S.C. and Kim, Y. (2011), “Determinants of consumer engagement in electronic word-of-mouth (ewom) in social networking sites”, International Journal of Advertising, Vol. 30 No. 1, pp. 47-75.

Ciasullo, M.V., Fenza, G., Loia, V., Orciuoli, F., Troisi, O. and Herrera-Viedma, E. (2018), “Business process outsourcing enhanced by fuzzy linguistic consensus model”, Applied Soft Computing, Vol. 64, pp. 436-444.

Cihon, P., Maas, M.M. and Kemp, L. (2020), “Should artificial intelligence governance be centralised? Design lessons from history”, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 228-234.

Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K. and Kuksa, P. (2011), “Natural language processing (almost) from scratch”, Journal of Machine Learning Research, No. 12, pp. 2493-2537.

Cortés-Cediel, M.E., Segura-Tinoco, A., Cantador, I. and Bolívar, M.P.R. (2023), “Trends and challenges of e-government chatbots: advances in exploring open government data and citizen participation content”, Government Information Quarterly, Vol. 40 No. 4, p. 101877.

De Bruyn, A., Viswanathan, V., Beh, Y.S., Brock, J.K.U. and Von Wangenheim, F. (2020), “Artificial intelligence and marketing: pitfalls and opportunities”, Journal of Interactive Marketing, Vol. 51 No. 1, pp. 91-105.

Desouza, K.C., Dawson, G.S. and Chenok, D. (2020), “Designing, developing, and deploying artificial intelligence systems: lessons from and for the public sector”, Business Horizons, Vol. 63 No. 2, pp. 205-213.

Dwivedi, D.N., Mahanty, G. and Vemareddy, A. (2022), “How responsible is AI? Identification of key public concerns using sentiment analysis and topic modelling”, International Journal of Information Retrieval Research, Vol. 12 No. 1, pp. 1-14.

Fetscherin, M. (2019), “The five types of brand hate: how they affect consumer behavior”, Journal of Business Research, Vol. 101 No. 101, pp. 116-127.

Filieri, R., Alguezaui, S. and McLeay, F. (2015), “Why do travelers trust tripadvisor? Antecedents of trust towards consumer-generated media and its influence on recommendation adoption and word of mouth”, Tourism Management, Vol. 51, pp. 174-185.

Fonberg, E. (1986), “Amygdala, emotions, motivation, and depressive states”, Biological Foundations of Emotion, Elsevier, pp. 301-331.

Frank, D.A., Chrysochou, P. and Mitkidis, P. (2023), “The paradox of technology: negativity bias in consumer adoption of innovative technologies”, Psychology and Marketing, Vol. 40 No. 3, pp. 554-566.

Friedman, B., Khan, P.H., Jr and Howe, D.C. (2000), “Trust online”, Communications of the ACM, Vol. 43 No. 12, pp. 34-40.

Fuchs, C. (2022), Digital Humanism: A Philosophy for 21st Century Digital Society, Emerald Group Publishing, Bingley.

Gao, Y., Janssen, M. and Zhang, C. (2023), “Understanding the evolution of open government data research: towards open data sustainability and smartness”, International Review of Administrative Sciences, Vol. 89 No. 1, pp. 59-75.

Gaudioso, F., Turel, O. and Galimberti, C. (2017), “The mediating roles of strain facets and coping strategies in translating techno-stressors into adverse job outcomes”, Computers in Human Behavior, Vol. 69, pp. 189-196.

Gebera, O.W.T. (2008), “La netnografía: un método de investigación en internet”, Educar, No. 42, pp. 81-93.

Giatsoglou, M., Vozalis, M.G., Diamantaras, K., Vakali, A., Sarigiannidis, G. and Chatzisavvas, K.C. (2017), “Sentiment analysis leveraging emotions and word embeddings”, Expert Systems with Applications, Vol. 69, pp. 214-224.

Hackley, C. (2023), Doing Research Projects in Marketing, Management and Consumer Research, Routledge.

Hegner, S.M., Fetscherin, M. and Van Delzen, M. (2017), “Determinants and outcomes of brand hate”, Journal of Product and Brand Management, Vol. 26 No. 1, pp. 13-25.

Hofstetter, R., Dahl, D.W., Aryobsei, S. and Herrmann, A. (2021), “Constraining ideas: how seeing ideas of others harms creativity in open innovation”, Journal of Marketing Research, Vol. 58 No. 1, pp. 95-114.

Huang, M.H. and Rust, R.T. (2021), “A strategic framework for artificial intelligence in marketing”, Journal of the Academy of Marketing Science, Vol. 49 No. 1, pp. 30-50.

Jeong, B., Yoon, J. and Lee, J.M. (2019), “Social media mining for product planning: a product opportunity mining approach based on topic modelling and sentiment analysis”, International Journal of Information Management, Vol. 48, pp. 280-290.

Johnson, D.S., Bardhi, F. and Dunn, D.T. (2008), “Understanding how technology paradoxes affect customer satisfaction with self-service technology: the role of performance ambiguity and trust in technology”, Psychology and Marketing, Vol. 25 No. 5, pp. 416-443.

Kempeneer, S., Pirannejad, A. and Wolswinkel, J. (2023), “Open government data from a legal perspective: an AI-driven systematic literature review”, Government Information Quarterly, Vol. 40 No. 3, p. 101823.

Khatoon, S. and Rehman, V. (2021), “Negative emotions in consumer brand relationship: a review and future research agenda”, International Journal of Consumer Studies, Vol. 45 No. 4, pp. 719-749.

Kim, T.W., Jiang, L., Duhachek, A., Lee, H. and Garvey, A. (2022), “Do you mind if I ask you a personal question? How AI service agents alter consumer self-disclosure”, Journal of Service Research, Vol. 25 No. 4, pp. 649-666.

Kozinets, R.V. (2002), “The field behind the screen: using netnography for marketing research in online communities”, Journal of Marketing Research, Vol. 39 No. 1, pp. 61-72.

Krishnamurthy, S. and Kucuk, S.U. (2009), “Anti-branding on the internet”, Journal of Business Research, Vol. 62 No. 11, pp. 1119-1126.

Krogh, A.H. and Triantafillou, P. (2024), “Developing new public governance as a public management reform model”, Public Management Review, pp. 1-17.

Lynn, L.E. (2010), “What endures? Public governance and the cycle of reform”, The New Public Governance?, Routledge, pp. 121-140.

McDuff, D. and Czerwinski, M. (2018), “Designing emotionally sentient agents”, Communications of the ACM, Vol. 61 No. 12, pp. 74-83.

McKinney, S.M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., Back, T., Chesus, M., Corrado, G.S., Darzi, A., Etemadi, M., Garcia-Vicente, F., Gilbert, F.J., Halling-Brown, M., Hassabis, D., Jansen, S., Karthikesalingam, A., Kelly, C.J., King, D., Ledsam, J.R., Melnick, D., Mostofi, H., Peng, L., Reicher, J.J., Romera-Paredes, B., Sidebottom, R., Suleyman, M., Tse, D., Young, K.C., Fauw, J.D. and Shetty, S. (2020), “International evaluation of an AI system for breast cancer screening”, Nature, Vol. 577 No. 7788, pp. 89-94.

Malik, R., Visvizi, A., Troisi, O. and Grimaldi, M. (2022), “Smart services in smart cities: insights from science mapping analysis”, Sustainability, Vol. 14 No. 11, p. 6506.

Martínez-Torres, M.R. (2014), “Analysis of open innovation communities from the perspective of social network analysis”, Technology Analysis and Strategic Management, Vol. 26 No. 4, pp. 435-451.

Milano, M., O’Sullivan, B. and Gavanelli, M. (2014), “Sustainable policy making: a strategic challenge for artificial intelligence”, AI Magazine, Vol. 35 No. 3, pp. 22-35.

Moore, C. and Chuang, L. (2017), “Redditors revealed: motivational factors of the reddit community”, Proceedings of the 50th Hawaii International Conference on System Sciences.

Munoko, I., Brown-Liburd, H.L. and Vasarhelyi, M. (2020), “The ethical implications of using artificial intelligence in auditing”, Journal of Business Ethics, Vol. 167 No. 2, pp. 209-234.

Nagar, Y. and Malone, T.W. (2011), Making Business Predictions by Combining Human and Machine Intelligence in Prediction Markets, Association for Information Systems.

Nanda, A.P. and Banerjee, R. (2020), “Binge watching: an exploration of the role of technology”, Psychology and Marketing, Vol. 37 No. 9, pp. 1212-1230.

Nguyen, M.T. and Tran, M.Q. (2023), “Balancing security and privacy in the digital age: an in-depth analysis of legal and regulatory frameworks impacting cybersecurity practices”, International Journal of Intelligent Automation and Computing, Vol. 6 No. 5, pp. 1-12.

Nicolescu, L. and Tudorache, M. (2022), “Human-computer interaction in customer service: the experience with AI chatbots – a systematic literature review”, Electronics, Vol. 11 No. 10, p. 1579.

Nida-Rümelin, J. and Weidenfeld, N. (2022), Digital Humanism: For a Humane Transformation of Democracy, Economy and Culture in the Digital Age, Springer Nature.

OECD (2022), OECD Employment Outlook 2022, OECD publishing, Paris.

Park, S., Strover, S., Choi, J. and Schnell, M. (2023), “Mind games: a temporal sentiment analysis of the political messages of the internet research agency on facebook and twitter”, New Media and Society, Vol. 25 No. 3, pp. 463-484.

Patton, D.U., Hong, J.S., Ranney, M., Patel, S., Kelley, C., Eschmann, R. and Washington, T. (2014), “Social media as a vector for youth violence: a review of the literature”, Computers in Human Behavior, Vol. 35, pp. 548-553.

Romani, S., Grappi, S., Zarantonello, L. and Bagozzi, R.P. (2015), “The revenge of the consumer! how brand moral violations lead to consumer anti-brand activism”, Journal of Brand Management, Vol. 22 No. 8, pp. 658-672.

Saura, J.R., Palacios-Marqués, D. and Ribeiro-Soriano, D. (2023), “Exploring the boundaries of open innovation: evidence from social media mining”, Technovation, Vol. 119, p. 102447.

Schmoelz, A. (2023), “Digital humanism, progressive neoliberalism and the European digital governance system for vocational and adult education”, Journal of Adult and Continuing Education, Vol. 29 No. 2, pp. 735-759.

Scuotto, V., Tzanidis, T., Usai, A. and Quaglia, R. (2023), “The digital humanism era triggered by individual creativity”, Journal of Business Research, Vol. 158, p. 113709.

Silva, K.C.D. (2019), “Why do people hate your brand?”, available at: www.forbes.com/sites/karencorreiadasilva/2019/08/21/why-do-people-hate-your-brand/?sh=2bd1535935c0

Simon, J. (2019), “Artificial intelligence: scope, players, markets and geography”, Digital Policy, Regulation and Governance, Vol. 21 No. 3, pp. 208-237.

Skraaning, G. and Jamieson, G.A. (2021), “Human performance benefits of the automation transparency design principle: validation and variation”, Human Factors: The Journal of the Human Factors and Ergonomics Society, Vol. 63 No. 3, pp. 379-401.

Sparks, B.A. and Browning, V. (2011), “The impact of online reviews on hotel booking intentions and perception of trust”, Tourism Management, Vol. 32 No. 6, pp. 1310-1323.

Sternberg, R.J. (2003), “A duplex theory of hate: development and application to terrorism, massacres, and genocide”, Review of General Psychology, Vol. 7 No. 3, pp. 299-328.

Stone, M., Aravopoulou, E., Ekinci, Y., Evans, G., Hobbs, M., Labib, A., Laughlin, P., Machtynger, J. and Machtynger, L. (2020), “Artificial intelligence (AI) in strategic marketing decision-making: a research agenda”, The Bottom Line, Vol. 33 No. 2, pp. 183-200.

Sun, Y., Shen, X.L. and Zhang, K.Z. (2023), “Human-AI interaction”, Data and Information Management, Vol. 7 No. 3, p. 100048.

Tanna, D., Dudhane, M., Sardar, A., Deshpande, K. and Deshmukh, N. (2020), “Sentiment analysis on social media for emotion classification”, Proceedings of the 4th International Conference on Intelligent Computing and Control Systems (ICICCS), IEEE, pp. 911-915.

Tarafdar, M., Tu, Q., Ragu-Nathan, B.S. and Ragu-Nathan, T. (2007), “The impact of technostress on role stress and productivity”, Journal of Management Information Systems, Vol. 24 No. 1, pp. 301-328.

Tõnurist, P. and Hanson, A. (2020), “OECD working papers on public governance”, OECD Working Papers on Public Governance, No. 44, pp. 1-146.

Troisi, O., Visvizi, A. and Grimaldi, M. (2023), “Digitalizing business models in hospitality ecosystems: toward data-driven innovation”, European Journal of Innovation Management, Vol. 26 No. 7, pp. 242-277.

Tu, Q., Wang, K. and Shu, Q. (2005), “Computer-related technostress in China”, Communications of the ACM, Vol. 48 No. 4, pp. 77-81.

Verma, S., Sharma, R., Deb, S. and Maitra, D. (2021), “Artificial intelligence in marketing: systematic review and future research direction”, International Journal of Information Management Data Insights, Vol. 1 No. 1, p. 100002.

Waldherr, A., Maier, D., Miltner, P. and Günther, E. (2017), “Big data, big noise: the challenge of finding issue networks on the web”, Social Science Computer Review, Vol. 35 No. 4, pp. 427-443.

Wang, P., Nair, M.S., Liu, L., Iketani, S., Luo, Y., Guo, Y., Wang, M., Yu, J., Zhang, B., Kwong, P.D., Graham, B.S., Mascola, J.R., Chang, J.Y., Yin, M.T., Sobieszczyk, M., Kyratsous, C.A., Shapiro, L., Sheng, Z., Huang, Y. and Ho, D.D. (2021), “Antibody resistance of SARS-CoV-2 variants B.1.351 and B.1.1.7”, Nature, Vol. 593 No. 7857, pp. 130-135.

Wirtz, B.W., Weyerer, J.C. and Geyer, C. (2019), “Artificial intelligence and the public sector—applications and challenges”, International Journal of Public Administration, Vol. 42 No. 7, pp. 596-615.

Wu, W., Lyu, H. and Luo, J. (2021), “Characterizing discourse about COVID-19 vaccines: a Reddit version of the pandemic story”, Health Data Science, Vol. 2021, p. 9837856.

Yadav, A. and Chakrabarti, S. (2022), “Brand hate: a systematic literature review and future research agenda”, International Journal of Consumer Studies, Vol. 46 No. 5, pp. 1992-2019.

Yau, K.L.A., Saad, N.M. and Chong, Y.W. (2021), “Artificial intelligence marketing (AIM) for enhancing customer relationships”, Applied Sciences, Vol. 11 No. 18, p. 8562.

Yu, J., Aduragba, O.T., Sun, Z., Black, S., Stewart, C., Shi, L. and Cristea, A. (2020), “Temporal sentiment analysis of learners: public versus private social media communication channels in a women-in-tech conversion course”, 15th International Conference on Computer Science and Education (ICCSE), IEEE, pp. 182-187.

Zarantonello, L., Romani, S., Grappi, S. and Bagozzi, R.P. (2016), “Brand hate”, Journal of Product and Brand Management, Vol. 25 No. 1, pp. 11-25.

Zhang, Y., Zhang, J. and Sakulsinlapakorn, K. (2020), “Love becomes hate? Or love is blind? Moderating effects of brand love upon consumers’ retaliation towards brand failure”, Journal of Product and Brand Management, Vol. 30 No. 3, pp. 415-432.

Zhao, J. and Gómez Fariñas, B. (2023), “Artificial intelligence and sustainable decisions”, European Business Organization Law Review, Vol. 24 No. 1, pp. 1-39.

Further reading

Zuiderwijk, A., Chen, Y.C. and Salem, F. (2021), “Implications of the use of artificial intelligence in public governance: a systematic literature review and a research agenda”, Government Information Quarterly, Vol. 38 No. 3, p. 101577.

Acknowledgements

This research has been financed by Portuguese public funds through FCT – Fundação para a Ciência e a Tecnologia, I.P., in the framework of the project with reference UIDB/04105/2020.

Corresponding author

Paolo Popoli can be contacted at: paolo.popoli@uniparthenope.it