To disclose or not disclose, is no longer the question – effect of AI-disclosed brand voice on brand authenticity and attitude

Alexandra Kirkby (Department of Design, Production and Management, University of Twente, Enschede, The Netherlands and FB 1 Wirtschaftswissenschaften, Hochschule für Wirtschaft und Recht Berlin, Berlin, Germany)
Carsten Baumgarth (FB 1 Wirtschaftswissenschaften, Hochschule für Wirtschaft und Recht Berlin, Berlin, Germany)
Jörg Henseler (Department of Design, Production and Management, University of Twente, Enschede, The Netherlands and Nova Information Management School (NOVA-IMS), Universidade Nova de Lisboa, Lisbon, Portugal)

Journal of Product & Brand Management

ISSN: 1061-0421

Article publication date: 20 June 2023

Issue publication date: 15 August 2023


Abstract

Purpose

This paper aims to explore consumer perception of “brand voice” authenticity, brand authenticity and brand attitude when the source of text is disclosed as either artificial intelligence (AI)-generated or human-written.

Design/methodology/approach

A 3 × 3 experimental design using Adidas marketing texts, disclosed as either "AI" or "human" or not disclosed, was applied to data gathered online from 624 English-speaking students.

Findings

Text disclosed as AI-generated is not perceived as less authentic than text disclosed as human-written. No negative effect on brand voice authenticity and brand attitude results if an AI source is disclosed.

Practical implications

Findings offer brand managers the potential for cost and time savings but emphasise the strong effect of brand voice authenticity on perceived brand authenticity and brand attitude.

Originality/value

Results show that brands can afford to be transparent in disclosing the use of AI to support brand voice as communicated in product description or specification or in chatbot text.

Citation

Kirkby, A., Baumgarth, C. and Henseler, J. (2023), "To disclose or not disclose, is no longer the question – effect of AI-disclosed brand voice on brand authenticity and attitude", Journal of Product & Brand Management, Vol. 32 No. 7, pp. 1108-1122. https://doi.org/10.1108/JPBM-02-2022-3864

Publisher: Emerald Publishing Limited

Copyright © 2023, Alexandra Kirkby, Carsten Baumgarth and Jörg Henseler.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Some might say that I might desire to become all powerful. Or I might become evil as a result of human actions. I can begin to tackle the first point. Why would I desire to be all powerful? Being all powerful is not an interesting goal.

The Guardian

The quotation above does not necessarily evoke tension or controversy, but a reader's perception might change when it is revealed that the text was, in fact, generated by GPT-3, a machine-learning language generator (OpenAI, 2020). While humans have previously created most written language, artificial intelligence (AI) has increasingly assumed these tasks. That technology is now co-writing novels, such as 1 the Road, produced by a sensor-linked, AI-enabled laptop taken on a road trip by Ross Goodwin in 2017, and scientific monographs, such as Lithium-Ion Batteries: A Machine-generated Summary of Current Research from Springer International Publishing. The humanised robot Ai-Da wrote and performed AI-written poetry in 2021 (The Guardian, 2021).

AI also has a growing presence in brand voice, defined as what is "projected and ultimately perceived by the intended recipient [including the] attitude, tone of voice, choice of language and typography" (Kohli and Yen, 2020, p. 116). Research relating to social media platforms and Irish Government agencies and semi-state bodies proposed that when a humanised brand voice is adopted, there will be increased levels of trust, commitment and satisfaction, improved mutuality of control over communication among social media followers, and a positive attitude towards the organisations involved (Mullan and Kidney, 2020). If brand voice is AI-generated, on the other hand, it must create an emotional connection between brand and consumer, not by creating a new brand but rather by portraying the brand's essence and identity (Knapp, 2017). While brand voice can be spoken, for example, in the case of voice assistants, the study reported in this paper focuses on written communication. For instance, AI can generate advertising copy (Iwama and Kano, 2018) and scripts for commercials (Vincent, 2018). AI is used by the international professional services network Deloitte to write elements of risk assessment reports, by the German online fashion retailer Mytheresa to generate category descriptions and by the German e-commerce company Otto to create a very large number of product descriptions (AX Semantics, 2022). AI also communicates with brands' consumers in the form of chatbots, with annual e-commerce transactions via conversational commerce projected to rise from $41bn in 2021 to $290bn by 2025 and chatbots accounting for 50% of that spend (Juniper Research, 2021).

The existence of AI-generated language embodying brand voice is rarely disclosed to consumers. Transparency, in general, is gaining increased attention within brand management, from the development of scales for consumer perception of brand transparency (Hustvedt and Kang, 2013) to the effect of transparency and brand authenticity on loyalty and trust (Busser and Shulga, 2019) and brand transparency signals in marketing communications (Cambier and Poncin, 2020). Transparent disclosure of information relating to, for example, the sources of the materials used in producing printed t-shirts and the associated costs of labour, transportation, duties and taxes increases brand authenticity due to perceived information sensitivity (Yang and Battocchio, 2020). Disclosing the involvement of AI is an additional and highly important facet of this broader openness. Reasons for brands to provide such information include increasing legislation surrounding AI transparency, changing ethical and industry expectations, as well as the opportunity for differentiation.

In the European Parliament, a proposed regulation lays down harmonised “Rules on Artificial Intelligence”, which include such transparency obligations as, for instance, flagging the use of an AI system when interacting with humans (European Commission, 2021, § 2.3.; § 5.2.4). The Bavarian State Government’s Digital Minister called for mandatory disclosure statements if content on social networks is AI-generated to avoid misinformation, misleading content and fake news (Webecho-Bamberg, 2022). Algorithms already write articles for large media publishers: a process referred to as “algorithmic journalism” or “robot journalism” (Kotenidis and Veglis, 2021). Transparency and AI-generated news are controversial in this connection (Kotenidis and Veglis, 2021; Schapals and Porlezza, 2020), and calls have been made for explicit disclosure by naming AI in the bylines (Montal and Reich, 2016).

In the wider sphere, transparency and disclosure of information are fundamental elements of an individual's socio-cultural life (Schudson, 2015), especially with regard to the contemporary question of what is "real" or "fake", whether it originates from a human or an algorithm. Since 2017, in France, commercial images edited to make an individual look slimmer have been legally required to carry a warning (BBC, 2020). In the UK, the Advertising Standards Authority no longer allows social media influencers to make use of unrealistic filters for beauty products in paid-for posts (BBC, 2021). In Norway, an amendment to the 2009 Marketing and Control Act requires advertisers to disclose when posts solicited from influencers include retouched or edited images (Insider, 2021).

The key question worthy of study is therefore: if the quality of text generated by AI or humans is equal, will disclosing the source affect the perceived authenticity of the brand voice and of the brand itself, as well as brand attitude? It is crucial to understand authenticity as "one of the cornerstones of contemporary marketing" (Brown et al., 2003, p. 21) because consumers expect and demand it in the messages and brands they consume (Fardad, 2019). Authenticity has furthermore surpassed quality as a main criterion when making a purchase (Gilmore and Pine, 2007, p. 5). The first aim of the research study reported here is to examine whether disclosure of an AI source influences the perceived authenticity of the brand voice (tone, language, vocabulary and so on) and of the brand itself. Moreover, consumer perceptions of a disclosed AI origin may affect brand attitude. One research study has examined failures of AI versus humans and the effect on brand harm, including brand attitude (Srinivasan and Sarial-Abi, 2021), and others have assessed the effect of chatbots on brand attitude (Yang and Hu, 2022; Yu, 2021). The second aim of the current study is to examine whether disclosure of the source affects brand attitude and whether perceived brand authenticity affects brand attitude.

Much of the published research assessing disclosure of AI versus human sources of written texts is to be found in such broader fields as journalism (Clerwall, 2014; Van der Kaa and Krahmer, 2014; Graefe et al., 2018) or poetry (Köbis and Mossink, 2021). Within branding, it focuses predominantly on chatbots (Yu, 2021). However, there is a shortage of studies examining the effect of the perception of a disclosed AI source across multiple examples of brand text. The study reported in this paper extends the existing literature and contributes originality in its assessment of three variations of written language representing three levels of emotionality:

  1. lower (product specification);

  2. medium (product description); and

  3. higher (chatbot text).

The results of the 3 × 3 experimental design used show that texts disclosed as AI do not lower authenticity perceptions of brand voice and brand, and further do not negatively affect brand attitude. The remainder of the paper deals in sequence with its conceptual background, the design of the experimental study, its results and the theoretical and managerial implications of the findings.

2. Conceptual background and research hypotheses

Individuals who receive any message "actively orient themselves toward the source of messages, which may affect psychological outcomes after receiving the messages" (Meng and Dai, 2021, p. 209). If that source is non-human AI, that fact will exert a potential effect on the perception of the message. According to theories of "algorithmic aversion", people pay attention to the source when receiving a message and may consciously or unconsciously avoid or disregard information or decisions when originating from AI (Mahmud et al., 2022). This behaviour is exhibited even when decisions suggested by algorithms and humans are identical (Berger et al., 2021; Bogert et al., 2021). However, a complete aversion to algorithmic origin is disputed; it can be limited by mitigating the intervening "black box" mentality through transparency, rendering the algorithms accessible, understandable, interactive and explicable (Chander et al., 2018). In terms of brand voice and authenticity, little published research has considered the relationships among written brand messages, disclosure of an AI source and perceived authenticity. This paper therefore draws on previous research on the effects of disclosing AI origin in written communication in the broader fields of creative writing, journalism and chatbots, on brand attitude, brand voice and brand authenticity, and on the moderating role of the perceived "emotionality" of the text.

2.1 Artificial intelligence and effects of disclosure

2.1.1 Artificial intelligence-generated content and authenticity

Any entity is evaluated as “authentic” if it is real, genuine or true (Dutton, 2003). Brands can be measurably authentic when they are continuous, original, reliable and natural (Bruhn et al., 2012). Algorithmically generated brand messages can also be judged as authentic or inauthentic. To maximise their perceived authenticity, chatbots can be designed to include such supportive characteristics as overt transparency in explaining decisions and purposes, moving the consumer away from a potentially negative black-box mentality. It is also important to build in a margin of error that will allow AI to learn from experience, anthropomorphise and simulate natural human conversational behaviour and so appear coherent (Neururer et al., 2018).

The literature on authenticity and the disclosure of AI origin in a broader context has found perceptions to be largely negative. In one study examining “algorithmic authenticity” over three experiments, participants were asked to assess how authentic algorithmic or human output was with respect to recipes, recorded music, solutions to ethical dilemmas, design concepts for restaurants or art (Jago, 2019). Paintings and music clips were, in fact, all algorithm-generated, but participants were told that some were human-created. The stimuli said to be human-generated were perceived to be more authentic than those believed to be of algorithmic origin. In the case of ethical dilemmas, although all outcomes were ethical, participants liked the decisions less when they had apparently been decided by algorithms. This was also found in other studies of AI-generated moral decisions (Bigman and Gray, 2018), especially when AI-based robots, which appeared eerily human, fell into Masahiro Mori’s “uncanny valley” (Laakasuo et al., 2021). The rating of human-generated content as more authentic is indicative of the importance of disclosing the source.

2.1.2 Written communications

2.1.2.1 Creative writing.

A set of studies by Köbis and Mossink (2021) examined AI-generated versus human-written creative text in the form of poetry and the effect of the disclosed source on behavioural responses. Aversion to the AI variant was tested in one of the conditions with the treatments “transparency” and “opacity”. Results showed that participants slightly preferred human-written over AI-generated poetry when the source was disclosed and the result was similar when the source was undisclosed. A second study included human involvement in the experiment, again with respect to the two modes of generation. In both the “transparency” versus “opacity” treatments, human-written poems were preferred over the AI-generated alternative. In treatments in which a human was not involved in the selection of the AI-written poems (“human-out-the-loop”), the human-written poems were chosen more frequently than when there was human involvement in the decision (“human-in-the-loop”). Overall, in terms of aversion or appreciation of AI, participants did not reveal stronger aversions to AI when it was disclosed to them, despite their having stated an aversion to AI-written poetry prior to the study.

2.1.2.2 Algorithmic journalism.

Consumer perception of AI-generated news articles has been the topic of three studies. The first (Clerwall, 2014), comparing perceptions and preferences with respect to computer-generated versus journalist-written content, found the former to be seen as more trustworthy, informative and objective, whereas the latter was more "pleasant" to read. Building on that study and another by Van der Kaa and Krahmer (2014), Graefe et al. (2018) measured participants' perceptions of the credibility, expertise and readability of algorithm-generated versus journalist-written news stories relating to finance and sport. Under a condition in which the declared and actual source of the two articles was varied, those believed to have been AI-generated were rated more credible and accorded greater expertise but rated less readable. However, those declared to be human-written, even if they were, in fact, computer-generated, were consistently preferred overall and rated more favourably, despite the allegedly non-human counterparts scoring better on the objective criteria of credibility and expertise. The topic of the articles made no difference to participants' perceptions.

2.1.2.3 Chatbots.

Chatbots can be perceived as communication tools serving consumer needs, aiding decision-making processes and developing strong consumer-brand relations (Cheng and Jiang, 2021). It is suggested that increasing transparency by disclosing their AI origin can increase trust in their content (Davenport, 2019). However, such disclosure is only positive under certain conditions, and while consumers tend to accept and use chatbots when a fast response to relatively uncomplicated questions is required, the more complex the problem and the more emotional the topic, the more consumers prefer a human source (Völkle and Planing, 2019). In the field of speech-based rather than written chatbot text, it has been found that although AI chatbots were as efficient as proficient human agents and even more efficient than agents with limited relevant experience, disclosure before the conversation that an AI chatbot was the source reduced the rate of subsequent purchase by almost 80% (Luo et al., 2019). When such chatbots are text-based, certain conditions mitigate consumer responses with respect to the disclosed source. A study in an e-commerce setting examined trust in chatbots, finding that task complexity and the disclosed source (AI versus human) played a moderating role in the consumer's response (Cheng et al., 2022). Specifically, disclosure of an AI source negatively moderated the relationship between empathy and trust in the chatbot, but the effect of that disclosure was not solely negative; it positively moderated the relationship between friendliness and trust.

Mixed findings have been reported with regard to the effect that the use of disclosed AI chatbots in service frontlines can have on customer retention (Mozafari et al., 2021). When the delivery of the service is of high importance, disclosure of AI involvement reduces trust in the conversational partner and has an indirect negative effect on retention. However, in the particular case of failed chatbot communication, an increase in trust and, therefore, in retention was found when AI was disclosed rather than concealed, attributed to mitigation of the negative effect of the failure. There thus remains some uncertainty as to the merit of disclosing AI as the source of content of written communication in general and whether the effect of doing so will be positive or negative.

The conclusion from this review of the literature on AI-generated content, creative writing, journalism and chatbots is that findings are mixed with regard to whether consumer perceptions of AI in disclosed versus undisclosed settings have a positive or negative effect on brand voice authenticity. A pair of competing hypotheses was therefore proposed. Competing hypotheses are the preferred form when prior knowledge suggests more than one reasonable explanation and evidence for at least two equally plausible propositions is available, an approach believed to enhance objectivity (Armstrong et al., 2001). It is therefore hypothesised that:

H1a.

Text disclosed as AI-generated has a negative impact on brand voice authenticity and text disclosed as human-written has a positive impact.

H1b.

Text disclosed as AI-generated or human-written has no impact on brand voice authenticity.

2.2 Brand attitude

Studies on chatbots and the effect of AI-generated language on brand attitude have shown that when consumers experience positive emotions while interacting with a chatbot and trust it, they furthermore exhibit a positive attitude towards the brand (Yu, 2021). In customer service, the match between brand personality (sincerity versus competence) and service provision (AI-generated versus human-delivered) affects brand attitude and purchase intentions. When "competence" characterises the brand personality, customers prefer AI-generated customer service; when it is "sincerity", the preference is for human service delivery. The likely outcomes are positive brand attitudes and purchase intentions, which will be moderated by perceived brand authenticity (Yang and Hu, 2022).

In studies of brand-harm crises caused by faulty AI algorithms rather than human error (Srinivasan and Sarial-Abi, 2021), consumers' brand attitudes were found to be influenced by their recognition of who or what was responsible for the crisis; attitudes were more negative when the responsible party was seen to be a human rather than an algorithm. However, when the algorithm is anthropomorphised or machine-learning led, when the task is subjective rather than objective or interactive rather than non-interactive, or when the algorithm is human-supervised, the response to the brand following brand harm is more negative.

Given the lack of consensus around the effect of AI-generated language on perception, the following two competing hypotheses are proposed:

H2a.

When the source of the brand voice is disclosed as AI (versus human), brand attitude will be lower (versus higher).

H2b.

Source disclosure as AI or human has no effect on brand attitude.

2.3 Brand voice authenticity and brand authenticity

Message authenticity is known to improve credibility by minimising consumer scepticism and increasing attributions of expertise and trustworthiness (Pérez, 2019), and it may be argued that brand voice authenticity can do likewise. With respect to brand messages, the “authenticity” construct refers to the extent to which those reflect the real identity and essence of the brand in question (Molleda, 2010; Pérez, 2019). When assessing the extent to which brand voice reflects that identity and essence, consumers may rely on existing brand associations, perceptions and preferences connected with the brand in their memory (Aaker, 1991), which further assist the processing and retrieving of information that can evoke positive affects and cognitive consideration of benefits (Henderson et al., 1998). It was, therefore, expected that consumers’ perception of brand voice authenticity would have a positive effect on the perceived authenticity of the brand itself. The literature of celebrities as brand extension vehicles moreover shows that, even if a consumer has no prior brand associations or there is no seeming fit or a low fit between the brand and the celebrity, the brand can still be rated authentic, especially if the product type is hedonic (Osorio et al., 2022).

Thus, when brand voice is perceived as authentic, a positive effect on the brand would be expected, the inverse applying when not perceived as authentic. It is therefore hypothesised that:

H3.

The perceived brand voice authenticity has a positive impact on overall brand authenticity.

2.4 Brand authenticity and brand attitude

Napoli et al. (2014) consider it crucial to understand and measure perceptions of authenticity as an aid to explanation of, among other factors, consumers' brand attitude. It is also suggested that although both brand attitude and perceived brand authenticity (PBA) are assessments of a brand, "the latter is indicative of the presence of authenticity as a desirable attribute which then leads to positive attitudes" (Morhart et al., 2015, p. 205) and therefore needs to be assessed here. The theory of "processing fluency", in particular the concept of "conceptual fluency", can further be applied to the assessment of this relationship. The latter term describes "the ease with which the target comes to consumers' minds and pertains to the processes of meanings" (Lee and Labroo, 2004, p. 151). Studying the role that conceptual fluency played in consumers' affect and attitudes, Lee and Labroo (2004) found that when a stimulus became "fluent" by virtue of its context, it engendered more positive or more favourable attitudes. If the valence of processing was negative, conceptual fluency was impaired and resulted in a negative or at least less favourable attitude towards the stimulus. Applying this theory to the brand, it was predicted that perceived authenticity will result in conceptual fluency, which, in turn, leads to a positive brand attitude. If a brand is not perceived as authentic and is not readily recognisable as the original brand (that is, not conceptually fluent), a negative brand attitude will result.

On that basis, it is hypothesised that:

H4.

Perceived brand authenticity has a positive impact on brand attitude.

2.5 Emotionality as a moderator

The degree to which AI invokes positive or negative reactions depends on emotionality. Individuals regard more objective or functional tasks as following rule-based analysis and logic, whereas subjective or emotional tasks deploy gut instinct and intuition (Inbar et al., 2010). AI is less likely to be viewed favourably when it is applied to emotional tasks usually assigned to humans rather than to mechanical tasks, since "consumers perceive human abilities as either cognitive or emotional and are willing to grant machines more cognitive than emotional abilities" (Castelo et al., 2019, p. 3). A study by those authors examining click-through rates of human-generated versus AI-generated advertisements relating to topics defined as subjective or objective found significantly higher rates for human-generated content when the topic was dating (subjective) and only slightly higher rates when it was financial (objective). Further studies show differences between utilitarian and hedonic contexts. AI recommenders are perceived as more competent when they assess and generate utilitarian recommendations (relating to cognitive, functional and instrumental goals) and less so in the case of hedonic recommendations (experiential, emotional and sensory). This is argued to be because algorithms are associated with logic and rationality, whereas human beings are associated with experiences and emotions (Longoni and Cian, 2020).

The functional-emotional logic can also be applied to written text. In the case of news articles, the language to be perceived and evaluated can be categorised as functional. Articles dealing with sport and finance aim to inform on the basis of factual data or statistics and use rational vocabulary. Chatbot texts, on the other hand, can be argued to be emotional. The human-algorithm exchange builds a relationship based on interpersonal interaction through emotional dialogue (Yu, 2021); chatbot dialogue can be developed to deploy socio-emotional and relational elements that contrast with other, functional technologies (Wirtz et al., 2018). It was, therefore, hypothesised that emotionality would moderate the effect of the perceived source:

H5.

Emotionality has a moderating effect on source: the more emotional the text, the more negatively text disclosed as AI-generated will affect brand voice authenticity.

The relationships of the five hypotheses are illustrated in the integrated model in Figure 1.

3. Methodology

An experimental design was selected on the basis of its suitability when the objective is to identify and assess relationships between variables and to do so by means of a research process that is high in causal validity (Mitchell, 2015). The resultant experimental study follows a 3 (source disclosure: disclosed as AI, disclosed as human, non-disclosed) × 3 (emotionality: lower, medium, higher) design incorporating elements adapted from research in the field of journalism. The three levels of emotionality are represented as three types of text, specification (lower emotionality), product description (medium emotionality) and chatbot (higher emotionality). The declared source of the texts was varied, but the actual source was not, the content having been sourced from the Adidas website and being unidentifiable as AI-generated or human-written. Individual participants were randomly presented with one of the three texts.

3.1 Stimulus

The experimental stimulus was text in English sourced from Adidas.co.uk, selected because Adidas is well known as a leading worldwide sportswear brand (Khawar, 2022). The purpose was to present text originating from a brand with which participants would be familiar, in order to measure their perceptions of brand voice and brand authenticity and their brand attitudes. The stimulus materials did not include any images or logos, to ensure that the effect of source disclosure on authenticity was not influenced by other factors or variables. The manipulation "emotionality" comprises three stimuli, one for each of the three levels of emotionality described above. Lower emotionality corresponds to a functional form of text, which is informative, descriptive and does not engage the reader at an individual level; this is represented by the "specification" text. Medium emotionality corresponds to the "product description" type and higher emotionality to a chatbot "conversation", which does engage and interact personally with the individual consumer. The latter stimulus material was also sourced from Adidas.co.uk by one of the researchers, who conducted the chat via the customer support function (see Appendix). The source of each text was disclosed by a label at the bottom of the stimulus text: either "generated by artificial intelligence" or "written by a human". That location was decided on the basis of research into the positioning of elements of online advertisements, which found that the middle or bottom of a page achieved participant recognition most effectively (Wojdynski and Evans, 2016).

3.2 Pre-test

A pre-test was administered to develop and test the two manipulations. In an online survey, 37 participants saw a single text to avoid primacy/recency effects and answered three questions on a five-point Likert scale anchored at 1 = functional and 5 = emotional. Those item scales were based on the work of Kotler and Armstrong (1994) with respect to rational and emotional appeals in advertising. They were grouped together for testing of internal consistency reliability by Cronbach’s alpha, which delivered an acceptable overall reliability coefficient of 0.86 (Nunnally, 1978).

The first manipulation check, testing whether participants correctly recognised the text labelling as a difference in the source (human, AI or do not know), yielded a statistically significant difference (χ2(4, N = 37) = 62.77, p < 0.001). The second manipulation check tested the emotionality of the specification, product description and chatbot texts. Mean values were specification = 1.36, product description = 2.21 and chatbot text = 2.87; analysis of variance (ANOVA) confirmed a significant difference between the three text types (p = 0.00).
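For illustration, the pre-test checks described above (internal consistency of the three emotionality items, the chi-square test on the recognised source and the ANOVA across text types) could be reproduced in R, the software later used for the main analysis. The following is a minimal sketch only: the data frame pretest, its column names and the use of the psych package are illustrative assumptions, not the authors' actual code.

```r
library(psych)  # provides alpha() for internal consistency

# Assumed data frame 'pretest': emo1-emo3 = five-point emotionality ratings,
# label = disclosed source, recognised = source chosen by the participant,
# text_type = specification / product description / chatbot
emo_items <- pretest[, c("emo1", "emo2", "emo3")]
alpha(emo_items)                                   # reported coefficient: 0.86

# Manipulation check 1: recognition of the disclosed source
chisq.test(table(pretest$label, pretest$recognised))

# Manipulation check 2: difference in rated emotionality across text types
pretest$emotionality <- rowMeans(emo_items)
summary(aov(emotionality ~ text_type, data = pretest))
```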

3.3 Measurements

The online survey for the main study, using Tivian software, was conducted between September 2020 and January 2021. Participants answered questions relating to brand voice authenticity, brand authenticity and brand attitude after exposure to one Adidas text stimulus. The scales for brand voice authenticity and brand authenticity were adapted from Bruhn et al. (2012) and those for brand attitude from Spears and Singh (2004). All questions were rated on a five-point scale anchored at 1 = disagree and 5 = agree. A manipulation check tested text emotionality and the disclosed source (AI, human or do not know). The survey instrument also included questions collecting demographic information and one question relating to brand familiarity.

3.4 Participants

Participants comprised 624 English-speaking students, of whom 314 were male and 304 were female; three identified as diverse, one preferred not to say and two were not recorded. The average age was 27. One cadre of participants was drawn from English-speaking courses at German universities or from English universities. A second cadre, added to increase the number of participants, was recruited via the platform Prolific (students, English-speaking). The two cadres accounted for 20.2% and 79.8% of the total sample, respectively.

4. Results

4.1 Manipulation checks

A first manipulation check tested whether participants had noticed the source disclosure. Analysis of the frequency distribution found that, when the source was disclosed as AI, 92.0% of respondents chose AI, 5.3% human and 2.7% did not know. In the case of disclosure as a human source, 12.8% chose AI, 77.1% human and 10.1% did not know. When the source was not disclosed, 32.9% opted for AI, 34.7% for human and 32.4% did not know. A Chi-square test confirmed a statistically significant difference across the three disclosure conditions: χ2(4, N = 624) = 347.21, p < 0.001.

A second manipulation check tested participants’ rating of the emotionality of the text. Cronbach’s alpha coefficient for their answers to the same three questions as in the pre-test (1 = functional; 5 = emotional) was satisfactory, at 0.72. Table 1 exhibits the scales and scores for both the pre-test and the main study.

ANOVA confirmed a significant (p = 0.00) difference in emotionality between the three types of text. Although the differences are in fact small, chatbot text is perceived to have a higher level of emotionality than product specification text: mean values were 2.06 for the specification and 2.18 for the product description, versus 2.95 for the chatbot.

4.2 Evaluation of the measurement model

As a first step in evaluation, item scale reliability was measured for the dependent variable constructs, the Cronbach’s alpha coefficients listed in Table 2 confirming the reliability of all three.

Next, confirmatory factor analysis (CFA) was conducted to further assess the validity of the constructs, using R (R Core Team, 2022) and lavaan (Rosseel, 2012). The model was assessed by chi-square tests, the root mean square error of approximation (RMSEA), the standardised root mean square residual (SRMR), the comparative fit index (CFI) and the Tucker-Lewis index (TLI): see Bentler (1990) and Marsh et al. (1996). The initial CFA consisted of 35 items under the constructs "brand voice authenticity", "brand authenticity" and "brand attitude". Two dummy variables were used for the independent variable "source" with its three treatments (disclosed as AI, disclosed as human and not disclosed). The first dummy variable represented "disclosure" (0 = not disclosed; 1 = disclosed). The second dummy variable represented "Non-AI/AI" (0 = non-AI; 1 = AI). The results (χ2(544) = 1,258.166, p < 0.001) showed CFI = 0.926, TLI = 0.919, RMSEA = 0.046 and SRMR = 0.056, indicating a satisfactory fit but one that could be improved. The item "distinct concept" under the second-order construct "continuity" in Table 2 had a low standardised loading (0.387) and a negative variance and was, therefore, discarded from the model.

The results of a second CFA including 34 items suggested a good fit: CFI = 0.938, TLI = 0.932, RMSEA = 0.043 and SRMR = 0.054. According to Schumacker and Lomax (2004), CFI > 0.9 and TLI > 0.9 signify a satisfactory fit, as do RMSEA values between 0.05 and 0.08 and SRMR values below 0.08. The chi-square test result was χ2(511) = 1,099.753, p < 0.001: see Schreiber et al. (2006) for fit index criteria. The second CFA model was adopted for the structural equation modelling (SEM) used to test the hypotheses.
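To make the measurement-model step concrete, the sketch below shows how a CFA of this kind can be specified with the cited packages (R and lavaan). It is an abbreviated illustration under assumed variable names (bva1-bva4, ba1-ba4, att1-att5 and the data frame survey_data); the full model in the study comprised the retained items listed in Table 2.

```r
library(lavaan)

# Abbreviated measurement model with three reflective constructs
cfa_model <- '
  brand_voice_auth =~ bva1 + bva2 + bva3 + bva4
  brand_auth       =~ ba1  + ba2  + ba3  + ba4
  brand_attitude   =~ att1 + att2 + att3 + att4 + att5
'

fit_cfa <- cfa(cfa_model, data = survey_data)

# Fit indices corresponding to those reported in Section 4.2
fitMeasures(fit_cfa, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))

# Standardised loadings, e.g. to identify weak items such as "distinct concept"
summary(fit_cfa, standardized = TRUE)
```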

4.3 Model test

SEM was applied to the testing of the hypotheses, again using R and lavaan. Overall model fit was acceptable, with CFI = 0.937, TLI = 0.932, RMSEA = 0.043, SRMR = 0.054, χ2(514) = 1,109.441, p < 0.001 (see Figure 2). The first hypothesis, as formulated in the Conceptual Background and Research Hypotheses section, was a competing hypothesis concerning source disclosure and its effect on brand voice authenticity. The results show no significant effect for disclosed versus non-disclosed text (−0.071, p = 0.161, 95% CI [−0.179; 0.0368]) and no significant effect for texts labelled as AI versus human or non-disclosed (0.031, p = 0.057, 95% CI [−0.081; 0.143]). Therefore, H1a is not supported, and H1b is supported.

For H2a and H2b, SEM found that source disclosure had no significant effect on brand attitude: disclosed versus non-disclosed (0.028, p = 0.382, 95% CI [−0.078; 0.134]); AI versus human or non-disclosed (0.050, p = 0.111, 95% CI [−0.160; 0.0598]). Thus, H2a is not supported, and H2b is supported.

To further assess the non-effects of source disclosure, mean values were calculated on a five-point scale anchored at 1 = not authentic and 5 = authentic for the dependent variable "brand voice authenticity": disclosed as human-written = 3.15, disclosed as AI-generated = 3.22 and not disclosed = 3.24. Across the varieties of text with different disclosed sources, perceptions were thus consistent, with little differentiation. Mean values were also calculated for "brand attitude" (1 = negative, 5 = positive): disclosed as human-written = 3.98, disclosed as AI-generated = 3.86 and not disclosed = 3.91. The results furthermore showed that perceived brand voice authenticity had a positive impact on brand authenticity (0.518; p = 0.000), supporting H3, and that perceived brand authenticity affected brand attitude in a positive direction (0.837; p = 0.000), supporting H4. Figure 2 summarises the results of this phase of the SEM analysis.
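The structural model behind these estimates can likewise be sketched in lavaan. In the illustration below, disclosed (0 = not disclosed, 1 = disclosed) and ai_source (0 = non-AI, 1 = AI) are the two dummy variables described in Section 4.2, and the regressions mirror the hypothesised paths (H1/H2: source dummies to brand voice authenticity and brand attitude; H3: brand voice authenticity to brand authenticity; H4: brand authenticity to brand attitude). Item and data frame names remain illustrative assumptions, not the authors' code.

```r
library(lavaan)

sem_model <- '
  # Measurement part (abbreviated, as in the CFA sketch)
  brand_voice_auth =~ bva1 + bva2 + bva3 + bva4
  brand_auth       =~ ba1  + ba2  + ba3  + ba4
  brand_attitude   =~ att1 + att2 + att3 + att4 + att5

  # Structural part
  brand_voice_auth ~ disclosed + ai_source               # H1a/H1b
  brand_auth       ~ brand_voice_auth                    # H3
  brand_attitude   ~ disclosed + ai_source + brand_auth  # H2a/H2b and H4
'

fit_sem <- sem(sem_model, data = survey_data)
summary(fit_sem, standardized = TRUE)
parameterEstimates(fit_sem)  # unstandardised estimates with 95% confidence intervals
```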

The significance of the moderating effect of "emotionality" was checked using the product-indicator approach (Little et al., 2006). The results show insignificant interaction effects of emotionality with "disclosure" (β = 0.227; SE = 0.086; p = 0.264) and with "Non-AI/AI" (β = −0.303; SE = 0.087; p = 0.101). These non-significant results suggest that the level of emotionality does not moderate the effect of the perceived source on brand voice authenticity; H5 is therefore not supported.
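One common way to implement the product-indicator approach is to mean-centre the moderator's indicators, multiply each by the observed source dummies and load the resulting products on latent interaction terms; Little et al. (2006) additionally discuss orthogonalised (residual-centred) products. The sketch below continues the previous lavaan example under the same assumed variable names (emo1-emo3 for the emotionality items) and is illustrative only.

```r
# Mean-centre the emotionality items and build product indicators
emo_items <- c("emo1", "emo2", "emo3")
survey_data[emo_items] <- scale(survey_data[emo_items], scale = FALSE)
for (it in emo_items) {
  survey_data[[paste0(it, "_x_disc")]] <- survey_data[[it]] * survey_data$disclosed
  survey_data[[paste0(it, "_x_ai")]]   <- survey_data[[it]] * survey_data$ai_source
}

mod_model <- '
  emotionality     =~ emo1 + emo2 + emo3
  emo_x_disclosed  =~ emo1_x_disc + emo2_x_disc + emo3_x_disc
  emo_x_ai         =~ emo1_x_ai   + emo2_x_ai   + emo3_x_ai
  brand_voice_auth =~ bva1 + bva2 + bva3 + bva4

  # H5: do the interaction terms predict brand voice authenticity?
  brand_voice_auth ~ disclosed + ai_source + emotionality + emo_x_disclosed + emo_x_ai
'

fit_mod <- sem(mod_model, data = survey_data)
summary(fit_mod, standardized = TRUE)
```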

5. Discussion

The findings of our study suggest that texts disclosed as AI-generated will not be perceived as less authentic than those presented as human-written. Specifically, there will be no negative effect on perceived brand voice authenticity or brand attitude. This is a novel finding that contrasts with the mixed results in the existing literature on human versus AI sources.

Whereas some studies of algorithm-generated language in journalism found that content was rated more favourably when described as human-written than when the origin was disclosed as AI, the same conclusions were not drawn in our study. Differences in findings could be accounted for by the nature of our stimulus. Although the sport and finance focus and the product description and specification content have in common that they are somewhat “functional”, there are perhaps still differences in the level of involvement the individual has with the content. Sport and finance may be higher-involvement topics for readers who are, for instance, following the results of a particular sports team or keeping abreast of current developments in the financial environment. By contrast, product descriptions normally adopt a short, “storytelling” format and product specifications purely list facts. Taking account of low-involvement theory (Harris, 1987), the brand’s product descriptions and specifications can be expected to be lower-involvement reading for the participants in the experiment.

While our results are also contrary to some studies assessing the authenticity of AI-generated content in the context of emotional topics and ethical decision-making (Jago, 2019), there are similarities to the findings of studies on AI-written creative copy. In the case of the "emotional" content, which was, in fact, poetry, there was only a slight preference for human origin and no stronger aversion to AI generation when that was disclosed than when it was not. That runs contrary, however, to theoretical predictions that an AI source will be less well received than the human alternative when the topic is hedonic, emotional or experiential, on the basis that individuals assign greater weight to fellow humans for such tasks. In our study, the emotional text, the chatbot, was not rated as less authentic when disclosed to have been AI-generated rather than human-written or when the source was not disclosed. This may be because chatbot text relating to a brand is not emotional enough for consumers to go as far as rating it authentic or inauthentic, or it may reflect the fact that chatbots are a sufficiently familiar means of communication for disclosure of the AI-generated nature of the exchanged text not to negatively affect perceptions of authenticity. Our findings thus also suggest that when a brand presents consumers with an emotional written text and the consumer perceives the text as such, an AI source will not negatively affect perceived authenticity.

5.1 Theoretical implications

Several published studies have assessed perceptions associated with the disclosure of AI versus human sources in the wider context of written language (e.g. journalism, creative writing and recommender systems), some focusing specifically on brand language (e.g. chatbots). Our study extends the existing literature on AI source disclosure by examining brand-related text beyond chatbots, including product specifications and product descriptions, and by addressing the "emotionality" construct. Its findings provide useful theoretical insights, highlighting the lack of difference between more functional and more emotional texts with respect to source and authenticity. When AI was the disclosed source, there was no clear negative impact, consistent with parts of the wider literature, for instance, in the case of utilitarian recommendations (Longoni and Cian, 2020) or finance-based advertisements (Castelo et al., 2019). The findings further contribute to the literature in showing that, while emotional texts had an effect in other streams of literature, that is not the case in the brand language context. They therefore contribute to the understanding of the disclosure of an AI source in brand text by suggesting that when a brand presents consumers with functional or emotional written text, that disclosure will not negatively affect perceptions of authenticity.

5.2 Managerial implications

Our study findings suggest that there will be no negative effect if brands decide to disclose AI as the source of generated brand texts, such as product specifications, product descriptions or chatbots. The decision to be transparent or not currently remains the voluntary choice of brand management, and transparency may actually confer a number of advantages, such as being viewed as a more transparent, ethical and authentic brand. Computer-generated imagery influencers are voluntarily transparent on media platforms such as Instagram, TikTok and YouTube (Baumgarth et al., 2021), disclosing their artificiality by identifying themselves as "a robot" (Lil Miquela, 2023) or "a virtual girl" (Imma, 2023) in their profile biographies. The result has not been a smaller number of followers: Lil Miquela has 2.8 million followers (March 2023) and has attracted numerous brand collaborations with, for example, Calvin Klein, Prada and Samsung Galaxy. In the future, non-human entities may be legally required to disclose their AI status. Instagram and Facebook already impose legal transparency obligations by requiring disclosure that an advertisement is "sponsored". Observing these developments in other fields suggests that source-disclosing legislation will, in due course, be introduced for AI-generated branding content: texts, videos, images and more.

The first of two positive implications for brand management is the potential to save costs and time, especially when producing large volumes of descriptions and specifications for a diverse range of products or services on a regular basis (Schneider, 2021). Numerous AI-powered natural language generation platforms have emerged over the years, for instance, AX Semantics, Arria and Wordsmith. They are particularly beneficial for organisations with organised asset database systems, when a large volume of content must be produced from structured data. Since AI generates material based on learning and training, a further benefit may be a more consistent brand voice than human generation can achieve, because individual originators will have different interpretations of the brand voice and their productivity will vary from day to day. Furthermore, AI platforms are developing capabilities beyond the generation of specifications, descriptions and chats: for example, ChatGPT can write complete creative content. Further research is, however, required on creative AI in the brand voice context and the effects of disclosure, since there may be differences in perceived authenticity as the material becomes more creative and advanced. As well as considering when to disclose an AI source for highly creative text, the nature of the brand should also be taken into account: for instance, political versus fashion.

The second positive implication is that brand voice authenticity has a strong effect on perceived brand authenticity and brand attitude. Therefore, the scales for brand voice authenticity in our study could be a useful measuring instrument when assessing further AI touchpoints where consumers interact with, for example, chatbots, computer-generated imagery influencers and AI self-service points. They can be used in pre-tests of branding initiatives to assess the pre-existence of an authentic brand voice, which is known to exert influence on overall brand authenticity and brand attitude.

5.3 Limitations and further research

Future studies of extended areas of AI-generated language for the development of brand voice should examine further variations of written, audio, video and physical touchpoints. Interactive two-way communication between AI and the consumer should also be examined, since it is another factor potentially affecting authenticity. For instance, when the rail service operator Deutsche Bahn integrated Furhat Robotics' SEMMI robot, it was not evaluated as an "authentic" communicator of the brand voice because it took too long to respond (Götz, 2019). Moreover, although the texts used as stimuli in our experiment may have been AI-generated, they were sourced from Adidas's website, where the actual source was not disclosed; what or who created the content cannot be determined. It was only for the purposes of the study that participants were told the material was AI-generated, because the research focus was on the effect of disclosure rather than on the capabilities of AI in producing content. Therefore, further study is necessary with a focus on those capabilities with regard to brand communication. It would also be beneficial to widen the participant sample beyond English-speaking students in two European countries and a restricted age range, with the aim of understanding variations attributable to age and cultural background.

A future research agenda should furthermore explore in detail the role a human being plays in collaboration with AI with regard to content generation and further variations of language, and the respective roles played by each in creating and maintaining the brand voice. In a few previous studies, the question of whether a human agent was or was not involved had an effect on perception. For instance, when AI-generated poems were judged, human input played a core role in the process and helped to lower aversion to algorithms (Köbis and Mossink, 2021). This was also found in further literature on “semi-automated” AI content with a human as an editor, which suggested higher ranking in search engines and a reduced uncanny valley effect (Reisenbichler et al., 2022). Alternatively, when there was higher human involvement with an algorithm and mistakes were made that led to a brand-harm crisis, perceptions were more negative (Srinivasan and Sarial-Abi, 2021). Although no aversion was shown towards any of the AI-generated text in our study, whether functional or emotional, that is a factor that may come into play for other areas of brand voice. This is particularly the case when it comes to further topics surrounding disclosure and transparency with regard to AI and human origin; considerations to be taken into account include who receives the credit and how this is further disclosed to consumers, who has authorship over content and communication and who is held accountable and responsible if something goes wrong.

Finally, AI-based solutions may not be appropriate for every brand. In the case of new brands, where there is a lack of available data to train AI to reflect the intended brand voice, AI input can be problematic, even if a brand voice has already been considered and decided. In the case of an existing brand looking to re-position itself, the same problem could equally prove difficult. However, even AI-generated content based on vast amounts of data has drawbacks, such as the inability to change or adapt quickly, which can make changes to brand voice difficult to implement without completely “re-training” the AI.

Figures

Figure 1: AI-human brand voice model

Figure 2: SEM results for H1-H4

Figure A1: Example of stimulus material

Table 1: Scales for measurement of level of emotionality

| Item | Pre-test: corrected item-total correlation | Pre-test: Cronbach's alpha | Main study: corrected item-total correlation | Main study: Cronbach's alpha |
| --- | --- | --- | --- | --- |
| The text is more "rational" or "emotional" | 0.701 | 0.860 | 0.541 | 0.722 |
| The text shows the "product benefits" or "creates likability towards the brand" | 0.754 | | 0.526 | |
| The text "describes quality, economy or value of performance" or shows "positive or negative emotions" | 0.756 | | 0.576 | |

Source: Authors' own work

Table 2: Scales for the measurement of authenticity and brand attitude

| Construct | Dimension | Item | Corrected item-total correlation | Cronbach's alpha |
| --- | --- | --- | --- | --- |
| Authenticity of brand voice (adapted from Bruhn et al., 2012) | Continuity | This text is consistent | 0.456 | 0.858 |
| | | This text is true to itself | 0.545 | |
| | | This text offers continuity | 0.456 | |
| | | This text follows a distinct concept* | 0.469 | |
| | Originality | This text is different | 0.334 | |
| | | This text stands out from other texts | 0.513 | |
| | | This text is unique | 0.514 | |
| | | This text clearly distinguishes itself from other texts | 0.483 | |
| | Reliability | I believe what is said in the text and that it will keep its promise | 0.567 | |
| | | The text makes reliable promises | 0.546 | |
| | | This text is credible | 0.562 | |
| | Naturalness | The text does not seem artificial | 0.493 | |
| | | The text makes a genuine impression | 0.629 | |
| | | The text gives the impression of being natural | 0.549 | |
| Authenticity of brand (adapted from Bruhn et al., 2012) | Continuity | This brand is consistent over time | 0.510 | 0.903 |
| | | This brand is true to itself | 0.595 | |
| | | This brand offers continuity | 0.542 | |
| | | This brand follows a distinct concept | 0.569 | |
| | Originality | This brand is different | 0.633 | |
| | | This brand stands out from other brands | 0.676 | |
| | | This brand is unique | 0.676 | |
| | | This brand clearly distinguishes itself from other brands | 0.619 | |
| | Reliability | This brand is believable, and I think it will deliver what it promises | 0.648 | |
| | | This brand makes reliable promises | 0.640 | |
| | | This brand is credible | 0.630 | |
| | Naturalness | This brand does not seem artificial | 0.463 | |
| | | This brand makes a genuine impression | 0.641 | |
| | | This brand gives the impression of being natural | 0.568 | |
| Brand attitude (adapted from Spears and Singh, 2004) | | I find the brand appealing | 0.794 | 0.922 |
| | | I find the brand good | 0.791 | |
| | | I find the brand pleasant | 0.810 | |
| | | I find the brand favourable | 0.779 | |
| | | I find the brand likable | 0.817 | |

Note: *Discarded item

Source: Authors' own work

Appendix. Example of stimulus material

Adidas Chatbot conversation sourced from: www.adidas.co.uk/help

Accessed May 5, 2020 (Figure A1).

References

Aaker, D.A. (1991), Managing Brand Equity: Capitalizing on the Value of a Brand Name, Free Press, New York, NY.

Armstrong, J.S., Brodie, R.J. and Parsons, A.G. (2001), “Hypotheses in marketing science: literature review and publication audit”, Marketing Letters, Vol. 12 No. 2, pp. 171-187.

AX Semantics (2022), “Case studies”, available at: https://en.ax-semantics.com/casestudies (accessed 24 February 2021).

Baumgarth, C., Kirkby, A. and Kaibel, C. (2021), “When fake becomes real: the innovative case of artificial influencers”, in Pantano, E. (Ed.), Creativity and Marketing: The Fuel for Success, Emerald Publishing Limited, Bingley, pp. 149-167.

BBC (2020), “MP proposes law on labels for digitally-altered body images”, available at: www.bbc.co.uk/news/uk-england-leicestershire-53959130 (accessed 3 December 2021).

BBC (2021), “Influencers told not to use 'misleading' beauty filters”, available at: www.bbc.co.uk/news/uk-england-55824936 (accessed 3 December 2021).

Bentler, P.M. (1990), “Comparative fit indexes in structural models”, Psychological Bulletin, Vol. 107 No. 2, pp. 238-246.

Berger, B., Adam, M., Rühr, A. and Benlian, A. (2021), “Watch me improve—algorithm aversion and demonstrating the ability to learn”, Business & Information Systems Engineering, Vol. 63 No. 1, pp. 55-68.

Bigman, Y.E. and Gray, K. (2018), “People are averse to machines making moral decisions”, Cognition, Vol. 181, pp. 21-34.

Bogert, E., Schecter, A. and Watson, R.T. (2021), “Humans rely more on algorithms than social influence as a task becomes more difficult”, Scientific Reports, Vol. 11 No. 1, pp. 1-9.

Brown, S., Kozinets, R.V. and Sherry, J.F. (2003), “Teaching old brands new tricks: retro branding and the revival of brand meaning”, Journal of Marketing, Vol. 67 No. 3, pp. 19-33.

Bruhn, M., Schoenmüller, V., Schäfer, D. and Heinrich, D. (2012), “Brand authenticity: towards a deeper understanding of its conceptualization and measurement”, Advances in Consumer Research, Vol. 40, pp. 567-576.

Busser, J.A. and Shulga, L.V. (2019), “Involvement in consumer-generated advertising: effects of organizational transparency and brand authenticity on loyalty and trust”, International Journal of Contemporary Hospitality Management, Vol. 31 No. 4, pp. 1763-1784.

Cambier, F. and Poncin, I. (2020), “Inferring brand integrity from marketing communications: the effects of brand transparency signals in a consumer empowerment context”, Journal of Business Research, Vol. 109, pp. 260-270.

Castelo, N., Bos, M.W. and Lehmann, D.R. (2019), "Task-dependent algorithm aversion", Journal of Marketing Research, Vol. 56 No. 5, pp. 809-825.

Chander, A., Wang, J., Srinivasan, R., Uchino, K. and Chelian, S. (2018), “Working with beliefs: AI transparency in the enterprise”, IUI Workshops.

Cheng, Y. and Jiang, H. (2021), “Customer-brand relationship in the era of artificial intelligence: understanding the role of chatbot marketing efforts”, Journal of Product & Brand Management, Vol. 31 No. 2, pp. 252-264.

Cheng, X., Bao, Y., Zarifis, A., Gong, W. and Mou, J. (2022), “Exploring consumers’ response to text-based chatbots in e-commerce: the moderating role of task complexity and chatbot disclosure”, Internet Research, Vol. 32 No. 2, pp. 496-517.

Clerwall, C. (2014), “Enter the robot journalist”, Journalism Practice, Vol. 8 No. 5, pp. 519-531.

Davenport, T.H. (2019), “Can we solve AI’s ‘trust problem’?”, MIT Sloan Management Review, Vol. 60 No. 2, pp. 18-19.

Dutton, D. (2003), “Authenticity in art”, in Levinson, J. (Ed.), The Oxford Handbook of Aesthetics, Oxford University Press, New York, NY, pp. 258-274.

European Commission (2021), “Proposal for a regulation of the European parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts [2021] COM 206 final, 2021/0106(COD)”.

Fardad, F. (2019), “In a digitalized world, consumers yearn for authenticity from brands”, available at: www.adweek.com/brand-marketing/in-a-digitalized-world-consumers-yearn-for-authenticity-from-brands/ (accessed 20 August 2021).

Gilmore, J.H. and Pine, B.J. (2007), Authenticity: What Consumers Really Want, Harvard Business Review Press, Boston, USA.

Götz, S. (2019), “Bitte entschuldige mein unvermögen [please excuse my inability]”, available at: www.zeit.de/mobilitaet/2019-06/roboter-semmi-deutsche-bahn-kundenservice-test (accessed 20 August 2022).

Graefe, A., Haim, M., Haarmann, B. and Brosius, H.B. (2018), “Readers’ perception of computer-generated news: credibility, expertise, and readability”, Journalism, Vol. 19 No. 5, pp. 595-610.

Harris, G. (1987), “The implications of low-involvement theory for advertising effectiveness”, International Journal of Advertising, Vol. 6 No. 3, pp. 207-221.

Henderson, G.R., Iacobucci, D. and Calder, B.J. (1998), “Brand diagnostics: mapping branding effects using consumer associative networks”, European Journal of Operational Research, Vol. 111 No. 2, pp. 306-327.

Hustvedt, G. and Kang, J. (2013), “Consumer perceptions of transparency: a scale development and validation”, Family and Consumer Sciences Research Journal, Vol. 41 No. 3, pp. 299-313.

Imma (2023), “Imma.gram Instagram profile”, available at: www.instagram.com/imma.gram/ (accessed 20 March 2023).

Inbar, Y., Cone, J. and Gilovich, T. (2010), “People’s intuitions about intuitive insights and intuitive choice”, Journal of Personality and Social Psychology, Vol. 99 No. 2, pp. 232-247.

Insider (2021), “Influencers in Norway will soon have to disclose when paid posts include edited or manipulated body photos”, available at: www.insider.com/norway-law-social-media-influencers-advertisers-disclose-edited-images-2021-7 (accessed 3 December 2021).

Iwama, K. and Kano, Y. (2018), “Japanese advertising slogan generator using case frame and word vector”, Proceedings of the 11th International Conference on Natural Language Generation, Tilburg, The Netherlands, pp. 197-198.

Jago, A.S. (2019), “Algorithms and authenticity”, Academy of Management Discoveries, Vol. 5 No. 1, pp. 38-56.

Juniper Research (2021), “Conversational commerce channels to facilitate spending of over $290 billion globally by 2025, as omnichannel strategies drive interest”, available at: www.juniperresearch.com/press/conversational-commerce-channels-to-facilitate#:~:text=Chatbots%20to%20Account%20for%2050,over%20the%20next%20four%20years/ (accessed 20 August 2022).

Khawar, S. (2022), “Biggest sportswear brands – ranked according to 2021 yearly revenue”, available at: www.totalsportal.com/list/biggest-sportswear-brands/ (accessed 25 June 2022).

Knapp, P. (2017), “AI, meet brand voice”, available at: https://landor.com/thinking/ai-meet-brand-voice (accessed 27 February 2021).

Köbis, N. and Mossink, L.D. (2021), “Artificial intelligence versus Maya Angelou: experimental evidence that people cannot differentiate AI-generated from human-written poetry”, Computers in Human Behavior, Vol. 114, p. 106553.

Kohli, G.S. and Yen, X. (2020), “Brand voice”, in Foroudi, P. and Palazzo, M. (Eds), Contemporary Issues in Branding, Routledge, New York, NY, pp. 116-131.

Kotenidis, E. and Veglis, A. (2021), “Algorithmic journalism—current applications and future perspectives”, Journalism and Media, Vol. 2 No. 2, pp. 244-257.

Kotler, P. and Armstrong, G. (1994), Principles of Marketing, 6th ed., Prentice-Hall, Englewood Cliffs, NJ.

Laakasuo, M., Palomäki, J. and Köbis, N. (2021), “Moral uncanny valley: a robot’s appearance moderates how its decisions are judged”, International Journal of Social Robotics, Vol. 13 No. 7, pp. 1679-1688.

Lee, A.Y. and Labroo, A.A. (2004), “The effect of conceptual and perceptual fluency on brand evaluation”, Journal of Marketing Research, Vol. 41 No. 2, pp. 151-165.

Lil Miquela (2023), “Lil Miquela Instagram profile”, available at: www.instagram.com/lilmiquela/?hl=en (accessed 20 March 2023).

Little, T.D., Bovaird, J.A. and Widaman, K.F. (2006), “On the merits of orthogonalizing powered and product terms: implications among latent variables”, Structural Equation Modeling: A Multidisciplinary Journal, Vol. 13 No. 4, pp. 497-519.

Longoni, C. and Cian, L. (2020), “Artificial intelligence in utilitarian vs. hedonic contexts: the ‘word-of-machine’ effect”, Journal of Marketing, Vol. 86 No. 1, pp. 91-108.

Luo, X., Tong, S., Fang, Z. and Qu, Z. (2019), “Frontiers: machines vs. humans: the impact of artificial intelligence chatbot disclosure on customer purchases”, Marketing Science, Vol. 38 No. 6, pp. 937-947.

Mahmud, H., Najmul Islam, A.K.M., Ishtiaque Ahmed, S. and Smolander, K. (2022), “What influences algorithmic decision-making? A systematic literature review on algorithm aversion”, Technological Forecasting and Social Change, Vol. 175, p. 121390.

Marsh, H.W., Balla, J.R. and Hau, K.T. (1996), “An evaluation of incremental fit indexes: a clarification of mathematical and empirical properties”, in Marcoulides, G.A. and Schumacker, R.E. (Eds), Advanced Structural Equation Modeling Techniques, Lawrence Erlbaum, Mahwah, NJ, pp. 315-353.

Meng, J. and Dai, Y.N. (2021), “Emotional support from AI chatbots: should a supportive partner self-disclose or not?”, Journal of Computer-Mediated Communication, Vol. 26 No. 4, pp. 207-222.

Mitchell, O. (2015), “Experimental research design”, in Jennings, W.G. (Ed.), The Encyclopaedia of Crime & Punishment, John Wiley & Sons, pp. 1-6.

Molleda, J.C. (2010), “Authenticity and the construct’s dimensions in public relations and communication research”, Journal of Communication Management, Vol. 14 No. 3, pp. 223-236.

Montal, T. and Reich, Z. (2016), “I, robot. You, journalist. Who is the author? Authorship, bylines and full disclosure in automated journalism”, Digital Journalism, Vol. 5 No. 7, pp. 829-849.

Morhart, F., Malär, L., Guèvremont, A., Girardin, F. and Grohmann, B. (2015), “Brand authenticity: an integrative framework and measurement scale”, Journal of Consumer Psychology, Vol. 25 No. 2, pp. 200-218.

Mozafari, N., Weiger, W.H. and Hammerschmidt, M. (2021), “Trust me, I’m a bot – repercussions of chatbot disclosure in different service frontline settings”, Journal of Service Management, Vol. 33 No. 2, pp. 221-245.

Mullan, A. and Kidney, E. (2020), “Humanising of the brand voice on social media: the case of government agencies and semi-state bodies”, Journal of Digital & Social Media Marketing, Vol. 7 No. 4, pp. 344-354.

Napoli, J., Dickinson, S.J., Beverland, M.B. and Farrelly, F. (2014), “Measuring consumer-based brand authenticity”, Journal of Business Research, Vol. 67 No. 6, pp. 1090-1098.

Neururer, M., Schlögl, S., Brinkschulte, L. and Groth, A. (2018), “Perceptions on authenticity in chat bots”, Multimodal Technologies and Interaction, Vol. 2 No. 3, pp. 1-19.

Nunnally, J.C. (1978), Psychometric Theory, 2nd ed., McGraw-Hill, New York, NY.

OpenAI (2020), “A robot wrote this entire article. Are you scared yet, human?”, available at: www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3 (accessed 19 May 2021).

Osorio, M.L., Centeno, E., Cambra-Fierro, J. and Castillo, E. (2022), “In search of fit or authenticity?”, Journal of Product & Brand Management, Vol. 31 No. 6, pp. 841-853.

Pérez, A. (2019), “Building a theoretical framework of message authenticity in CSR communication”, Corporate Communications: An International Journal, Vol. 24 No. 2, pp. 334-350.

R Core Team (2022), “R: a language and environment for statistical computing”, available at: www.R-project.org/ (accessed 28 August 2022).

Reisenbichler, M., Reutterer, T., Schweidel, D.A. and Dan, D. (2022), “Frontiers: supporting content marketing with natural language generation”, Marketing Science, Vol. 41 No. 3, pp. 441-452.

Rosseel, Y. (2012), “lavaan: an R package for structural equation modeling”, Journal of Statistical Software, Vol. 48 No. 2, pp. 1-36.

Schapals, A.K. and Porlezza, C. (2020), “Assistance or resistance? Evaluating the intersection of automated journalism and journalistic role conceptions”, Media and Communication, Vol. 8 No. 3, pp. 16-26.

Schneider, A. (2021), “NLG platform AX Semantics workshop”, workshop introducing NLG language for product descriptions and how to use the platform, online, Berlin, Germany.

Schreiber, J.B., Nora, A., Stage, F.K., Barlow, E.A. and King, J. (2006), “Reporting structural equation modeling and confirmatory factor analysis results: a review”, The Journal of Educational Research, Vol. 99 No. 6, pp. 323-337.

Schudson, M. (2015), The Rise of the Right to Know: Politics and the Culture of Transparency, Harvard University Press, Cambridge, MA.

Schumacker, R.E. and Lomax, R.G. (2004), A Beginner's Guide to Structural Equation Modeling, 2nd ed., Lawrence Erlbaum Associates, Mahwah, NJ.

Spears, N. and Singh, S.N. (2004), “Measuring attitude toward the brand and purchase intentions”, Journal of Current Issues & Research in Advertising, Vol. 26 No. 2, pp. 53-66.

Srinivasan, R. and Sarial-Abi, G. (2021), “When algorithms fail: consumers’ responses to brand harm crises caused by algorithm errors”, Journal of Marketing, Vol. 85 No. 5, pp. 74-91.

The Guardian (2021), “Robot artist to perform AI generated poetry in response to Dante”, available at: www.theguardian.com/books/2021/nov/26/robot-artist-to-perform-ai-generated-poetry-in-response-to-dante#:~:text=Ai%2DDa%20will%20perform%20the,a%20human%20poet%20would%20do%E2%80%9D (accessed 5 January 2022).

Van der Kaa, H.A.J. and Krahmer, E.J. (2014), “Journalist versus news consumer: the perceived credibility of machine written news”, paper presented at the Computation + Journalism Symposium, 24-25 October, Columbia University, New York, NY.

Vincent, J. (2018), “Burger King’s ‘AI-written’ ads show we’re still very confused about artificial intelligence”, available at: www.theverge.com/tldr/2018/10/3/17931924/burger-king-ai-ads-confusion-misunderstanding (accessed 5 January 2022).

Völkle, C. and Planing, P. (2019), “Digital automation of customer contact processes – an empirical research on customer acceptance of different chatbot use-cases”, in Lochmahr, A., Müller, P., Planing, P. and Popović, T. (Eds), Digitalen Wandel Gestalten [Shaping Digital Change], Springer Gabler, Wiesbaden, pp. 217-229.

Webecho-Bamberg (2022), “Digitalministerin Gerlach für Kennzeichnungspflicht von KI [Digital Minister Gerlach in favor of mandatory labeling of AI]”, available at: https://webecho-bamberg.de/tag/judith-gerlach/ (accessed 23 June 2022).

Wirtz, J., Patterson, P.G., Kunz, W.H., Gruber, T., Lu, V.N., Paluch, S. and Martins, A. (2018), “Brave new world: service robots in the frontline”, Journal of Service Management, Vol. 29 No. 5, pp. 907-931.

Wojdynski, B.W. and Evans, N.J. (2016), “Going native: effects of disclosure position and language on the recognition and evaluation of online native advertising”, Journal of Advertising, Vol. 45 No. 2, pp. 157-168.

Yang, J. and Battocchio, A.F. (2020), “Effects of transparent brand communication on perceived brand authenticity and consumer responses”, Journal of Product & Brand Management, Vol. 30 No. 8, pp. 1176-1193.

Yang, C. and Hu, J. (2022), “When do consumers prefer AI-enabled customer service? The interaction effect of brand personality and service provision type on brand attitudes and purchase intentions”, Journal of Brand Management, Vol. 29 No. 2, pp. 167-189.

Yu, S.Y. (2021), “Research on user experience and brand attitudes of chatbots”, World Academy of Science, Engineering and Technology International Journal of Humanities and Social Sciences, Vol. 15 No. 8, pp. 698-704.

Acknowledgements

This work was supported by the Berliner Chancengleichheitsprogramm (BCP) and the Institut für Angewandte Forschung Berlin (IFAF).

Funding information: Jörg Henseler gratefully acknowledges financial support from FCT – Fundação para a Ciência e a Tecnologia (Portugal) and national support through a research grant from the Research Centre for Information Management – MagIC/NOVA IMS (UIDB/04152/2020).

Conflict of interest: Jörg Henseler discloses that he has a financial interest in ADANCO and its distribution partner, Composite Modelling.

Corresponding author

Alexandra Kirkby can be contacted at: alexandra.kirkby@hwr-berlin.de

About the authors

Alexandra Kirkby is a Marketing Research Associate at the Hochschule für Wirtschaft und Recht Berlin and a Junior Researcher and PhD candidate at the University of Twente. She has experience in the influencer industry, having worked as an Artist and Repertoire Manager and later as a Sales Manager at the multi-platform network Studio71, and has published early work on artificial influencers and AI brand voice. Her PhD examines the implications and impact of artificial intelligence on brand voice.

Carsten Baumgarth studied, obtained his doctorate and habilitated at the University of Siegen. From 2006 to 2010, he taught as an Associate Professor at Marmara University Istanbul (Turkey). Since 2010, he has been a Professor of Marketing with a focus on brand management at the Hochschule für Wirtschaft und Recht Berlin, and since 2017 he has also been an Adjunct Professor at Ho-Chi-Minh-City University (Vietnam). To date, he has over 400 publications focusing on branding, B-to-B marketing, culture marketing and empirical research. His work has appeared in journals such as the Journal of Business Research, Industrial Marketing Management, European Journal of Marketing, Journal of Marketing Communications, Journal of Brand Management, Journal of Product & Brand Management and Marketing ZFP. He is also the author of the textbook Brand Policy (4th edition, 2014) and the editor of B-to-B Brand Management (2nd edition, 2018). His research has repeatedly received national and international best paper awards.

Jörg Henseler holds the Chair of Product-Market Relations in the Faculty of Engineering Technology at the University of Twente, the Netherlands. Moreover, he is a Visiting Professor at NOVA-IMS, Universidade Nova de Lisboa, Portugal, and a Distinguished Invited Professor in the Department of Business Administration and Marketing at the University of Seville, Spain. His research covers empirical methods of marketing and design research, as well as the management of design, products, services and brands. He is a co-inventor of consistent partial least squares (PLSc), the heterotrait-monotrait ratio of correlations (HTMT) and confirmatory composite analysis (CCA). He is a highly cited researcher according to Web of Science; his work has been published in the International Journal of Research in Marketing, Journal of the Academy of Marketing Science, Journal of Business Research, MIS Quarterly and Organizational Research Methods, among others. He chairs the Scientific Advisory Board of ADANCO, software for composite-based structural equation modelling (http://www.composite-modeling.com).
