Artificial intelligence based decision-making in accounting and auditing: ethical challenges and normative thinking

Purpose – This paper aims to identify the ethical challenges of using artificial intelligence (AI)-based accounting systems for decision-making and discusses its findings based on Rest's four-component model of antecedents for ethical decision-making. The study derives implications for accounting and auditing scholars and practitioners.

Design/methodology/approach – This research is rooted in the hermeneutics tradition of interpretative accounting research, in which the reader and the texts engage in a form of dialogue. To substantiate this dialogue, the authors conduct a theoretically informed, narrative (semi-systematic) literature review spanning the years 2015–2020. The review's narrative is driven by the contexts depicted in the selected articles, and the accounting/auditing practices found in these articles, rather than their research questions or methods, serve as the sample.

Findings – In the thematic coding of the selected papers, the authors identify five major ethical challenges of AI-based decision-making in accounting: objectivity, privacy, transparency, accountability and trustworthiness. Using Rest's component model of antecedents for ethical decision-making as a stable framework for their structure, the authors critically discuss the challenges and their relevance for a future human–machine collaboration with varying agency between humans and AI.

Originality/value – This paper contributes to the literature on accounting as a subjectivising as well as mediating practice in a socio-material context. It does so by providing a solid base of arguments that AI alone, despite its enabling and mediating role in accounting, cannot make ethical accounting decisions because it lacks the necessary preconditions in terms of Rest's model of antecedents. What is more, as AI is bound to pre-set goals and subjected to human-made conditions despite its autonomous learning and adaptive practices, it lacks true agency. As a consequence, accountability needs to be shared between humans and AI.
The authors suggest that related governance as well as internal and external auditing processes need to be adapted in terms of skills and awareness to


Introduction
Companies and financial service firms alike increasingly use Artificial Intelligence (AI) to aggregate and transform data from various sources and derive better decision-relevant information in complex environments (Jarrahi, 2018; Joseph and Gaba, 2020) to gain economic benefits. AI can be seen as an umbrella term in this global mega-trend that includes Big Data approaches (Gepp et al., 2018; Salijeni et al., 2018) and sophisticated machine learning. Our research team consists of five principal researchers, who have academic as well as practical backgrounds in accounting, auditing, sociology and information sciences (Dwivedi et al., 2021; Jeacle and Carter, 2014).
Our paper is structured as follows. After providing a literature background on ethical decision-making, we first narratively (theory-driven, semi-systematically) review and interpret associated AI-based decision-making contexts in ABS/AJG-ranked scholarly articles from 2015 to 2020 and identify related ethical challenges in these. Second, to provide a solid framework for our subsequent conceptual discussion of the moral antecedents of ethical decision-making, which includes the perspective of the cognitive states (Bedford et al., 2019; Orlitzky, 2016) of both the involved humans and AI, we utilise Rest's well-established four-component model of morality (Rest, 1986, 1994) and map the identified challenges to it. Rest's model is one of the most salient in terms of accounting research use (Baud et al., 2019; Lombardi et al., 2015; Lombardi, 2016; Sorensen et al., 2015). It looks at moral behaviour as a prerequisite for ethical decision-making and builds on rational traditions while also considering the actors' cognitive states. This is especially important given that we expect human and AI collaboration in the near future, which may potentially lead to competition between purely rational cognition and value-based moral interpretation amongst the involved actors, for example when it comes to levels of certainty or doubt over some scenarios. Finally, our paper ends with a critical and normative discussion of the findings in terms of potential future human-machine collaborations, from which we suggest theoretical as well as practical implications and future research.

Ethical decision-making and AI in accounting
A background on ethical decision-making
In general, ethical decision-making refers to the process in which individuals use their personal moral base to determine whether a certain action is right or wrong (Christopoulos et al., 2016; Kish-Gephart et al., 2010; Sturm, 2015). This process is thus characterised by moral issues and agents, both embedded in organisational and societal contexts. A moral issue arises when an individual's behaviour can either help or damage others. A moral agent is an individual who acknowledges the presence of a moral issue and acts according to their personal moral code (Zollo et al., 2016). The factors that constitute an ethical or unethical decision vary between individuals, communities and environments (Christopoulos et al., 2016). Thus far, two main approaches have emerged in the literature on ethical decision-making: the rational (connected to measurable outcomes) and the intuitive (led by an intrinsic morality) traditions (McManus, 2018; Zollo et al., 2016). To reconcile both traditions, Zollo et al. (2016) consider moral intuition a forerunner to an ethical decision-making process, to be blended with rational moral reasoning, and introduce the concept of synderesis: the natural capacity or disposition (habitus) of humans that generally allows a simple apprehension of what is good. However, once machine actors are introduced, this stream of research might not be particularly helpful, as the presence of a conditio humana in AI is precisely what needs to be questioned, not assumed. What is more, given the ability of AI to perform complex cognitive processes together with its autonomous learning and adapting, it becomes clear that any framework for understanding AI-based ethical decision-making needs to include a perspective on the underlying cognitive states of the involved actors.
One of the most salient models of moral processes as antecedents for ethical decision-making in the literature is Rest's (1986, 1994) four-component model (Fleischman et al., 2017; Shawver and Shawver, 2018; Valentine and Godkin, 2019; Zollo et al., 2018). It is rooted in the rational tradition as described above, but entails an awareness of actors' cognitive states (Hirsh et al., 2018). The model presumes that an ethical decision results when individuals complete the following four psychological processes: (1) attain moral awareness/sensitivity regarding the existence of an ethical issue, (2) apply moral judgement to the problem to decide what is right, (3) formulate and possess moral motivation to act ethically and (4) develop moral character, that is, the power to translate their ethical intent into appropriate moral behaviour (Paik et al., 2017; Weber, 2017; Zollo et al., 2016).
Moral awareness is the first and most important component in Rest's ethical decision-making process because it determines whether a situation contains moral content and can and should be considered from a moral perspective (Morales-Sánchez and Cabello-Medina, 2013). The decision-maker in this step exhibits "sensitivity", as referred to by Rest (1994), towards others and their welfare. A lack of moral awareness can lead to unethical decision-making due to the influence of situational, individual and motivational biases (Kim and Loewenstein, 2021; McManus, 2018). For example, McManus' (2018) paper discusses how hubris leads to individuals' failure to display moral awareness within their decision-making.
Moral judgement is the second component, in which the decision-maker makes a moral judgement on an identified ethical issue, that is, a judgement on "what is considered as morally correct" (Zollo et al., 2016). The decision-maker in this phase assesses "good" and "bad" (Morales-Sánchez and Cabello-Medina, 2013) outcomes regardless of personal interest. Thus, he/she can decide "which course of action is more morally justifiable" (Morales-Sánchez and Cabello-Medina, 2013).
Moral motivation is the third component, which differs from moral judgement due to the strong role of personal interests. Moral judgement allows others to assess various decision outcomes. In contrast, moral motivation and the resulting moral intention (Kish-Gephart et al., 2013; Kish-Gephart et al., 2010) are based on factors of the "self", for example by looking at the damage a morally correct action might cause for the actor herself. Moral intention thus also connects to the willingness to act on the judgement. This condition may well lead to a divergence between judgement and action, something that is well reflected in the fictional literature on AI, for example in Asimov's "Three Laws of Robotics" (Anderson, 2007), which can be summarised as follows:

(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
(2) A robot must obey human beings' orders, except where such orders would conflict with the first law.
(3) A robot must protect its own existence providing this protection does not conflict with the first and second laws.
Finally, moral character (or behaviour) is the fourth component, which involves the execution and implementation of the previously formed moral intention (Rest, 1986, 1994). Translating moral motivation and intention into moral character, however, also depends on individual and environmental challenges (Hannah et al., 2015) and the given agency. Such agency for AI in accounting would involve conferring formal decision-making power on the AI-based accounting system and necessitates the trust of those following its recommendations.
Rest's established four-component model of ethical decision-making as a process will provide a proven structure for us to evaluate the specific influence of the identified ethical challenges in an AI-based future, and it will guide our normative thinking when we summarise and elaborate on the potential future of human-machine collaboration in accountancy.
AI and ethical decision-making in the accounting and auditing literature
Gong (2016) focuses on ethical decision-making in accounting in his critical book review. He suggests that actors and the complexity of their interactions are a major source of ethical dilemmas. This complexity will only be aggravated by the addition of smart, AI-based robotic co-workers (Huang and Vasarhelyi, 2019) as actors with varying degrees of agency (i.e. agreed power). Hence, transferring the recent insights of Dillard and Vinnari (2019) on critical dialogical accountability to these future scenarios of robot-human interactions, it may be interesting to determine who the responsible actors are and might be (Dalla Via et al., 2019).
To add another dimension, Martin (2019b) researches the complex algorithms used in machine learning as the basis of all AI actors. These algorithms (Kellogg et al., 2020) are inherently value-laden and create positive and negative moral consequences based on ethical principles (Brougham and Haar, 2017; Martin, 2019b). Martin (2019b) further conducts a comprehensive ethical review of software developers as the source of these algorithms and discusses their responsibility and accountability. The core concept of AI-based algorithms implies that they learn independently from the available data and do not follow predefined rules (Lindebaum et al., 2020). Therefore, data are the underlying "fuel", and a potential source of bias, of algorithms and thus have to be accurate and meaningful during training and real-time application (Al-Htaybat and von Alberti-Alhtaybat, 2017; Arnaboldi et al., 2017b; Baker and Andrew, 2019; Gepp et al., 2018; Warren et al., 2015). Munoko et al. (2020) study the ethical implications of AI in auditing. Similar to Jarrahi (2018), they distinguish between three scenarios of human-machine collaboration, each with varying related ethical issues. The first step of an AI implementation is called assisted AI, designed to "support humans in taking action". Augmented AI is the second step, in which parts of the decision-making process are handled by AI (Losbichler and Lehner, 2021). The third step, albeit in the more distant future, is autonomous (or strong) AI, where the AI decides which data to include for its decision-making and has also been given the agency and trust to execute these decisions (Glikson and Woolley, 2020; Lehner et al., 2021). Each of these scenarios displays a different agency level for the AI, and thus some of the components in Rest's model will become more or less relevant.

Research design
This paper is rooted in the hermeneutics tradition (Bleicher, 2017; Francis, 1994; Prasad, 2002) of interpretative accounting research (Chua, 1988; Francis, 1994; Lukka and Modell, 2010; Parker, 2008) and is based on a theory-driven, semi-systematic literature review. Our interpretation of the literature follows the hermeneutic circle, in which the reader and the data engage in a form of dialogue. In this, the pre-understandings of the researchers play a key role and are crucial for drawing meaning from the text. In our case, the research team consists of five people from different academic backgrounds, who bring to the table theoretical and practical accounting/auditing know-how and an embeddedness in sociological theory and computer/information sciences (Dumay and Guthrie, 2019; Jeacle and Carter, 2014). Based on the above, the hermeneutic tradition also demands a critical and reflexive attitude to identify potentially unwanted pre-conceptualisations and ideologies, and an awareness of the intrinsic transitions from conceptual pre-configurations to configurations and ultimately to potential re-configurations upon additional texts (Bleicher, 2017; Shalin, 2007).
The identified articles in our review are thus not meant to tell the story but rather to induce and inspire our own narrative on the ethical challenges by providing the situative contexts and insights from which we can identify them. In addition, despite our strong endeavours to achieve a form of qualitative validity, for example through inter-coder reliability measures, it is not our claim to derive some sort of universal truth or test from our inquiries but rather to pragmatically derive insights and inspire future accounting research from a variety of angles.

Data collection
To build the basis and identify the ethical challenges, we follow the recommendations of Parker and Northcott (2016) and Snyder (2019) and conduct a theoretically informed, narrative literature review, which semi-systematically synthesises a topic through interpretation. A semi-systematic or "narrative" review approach is designed for "topics that have been conceptualized differently and studied by various groups of researchers within diverse disciplines" (Snyder, 2019).
This approach provides an understanding of complex areas based on a qualitative content analysis rather than on measuring effect sizes. Following Snyder (2019), Denzin (2018) and Parker and Northcott (2016), such an undertaking allows the detection of themes, theoretical perspectives and theoretical concept components. In our case, these themes are the ethical challenges in AI-based decision-making in accounting. Our protocol concerning the identification and selection of the articles is detailed in the next section, with our thinking grounded in a theoretical sampling strategy, employing, as Parker and Northcott (2016) indicate, a "... gradual broadening of sample selection criteria as the researcher develops their theory, particularly with a view to its encompassing wider variations that permit theoretical generalisation".
First, we scanned the 2018 Academic Journal Guide (AJG) (please see https://charteredabs.org/academic-journal-guide-2018/), published by the Chartered Association of Business Schools (ABS), for 3, 4 and 4*-rated journals from 2015 to 2020 from the fields of accounting (auditing), economics, finance as well as information sciences and general management (including business ethics) that had accounting-related content, leading to 148 journals of (some) relevance. Then, we used the Scopus database to search for titles and abstracts within these previously identified journals using the keywords below.
(1) Artificial intelligence "AND" (critics "OR" critique)

(2) Artificial intelligence "AND" (challenges "OR" implications)

(3) Artificial intelligence "AND" future

(4) Artificial intelligence "AND" (ethics "OR" moral "OR" justice)

We alternatively replaced the term "artificial intelligence" with the following terms (and their derivations) in subsequent runs:

(1) AI

(2) Decision-making

(3) Big Data

(4) Robotic process automation "OR" robot

(5) Smart machines

(6) Automation

This initial, broad-ranging search elicited 2,969 journal articles, including duplicates due to the overlap of different search terms. Next, the duplicated articles were eliminated, leaving 2,472 articles. True to the nature of a semi-systematic review, which aims for understanding and rich insights rather than completeness or clear boundaries, we then reduced the number of "somewhat" relevant articles to a more manageable set that seemed to provide actual insights (compared to, for example, articles simply describing a novel technology). We did this in two steps. First, an interpretive reading of the abstracts filtered out articles irrelevant to our deeper interest in ethical challenges in decision-making situations in combination with the technologies listed above. This interpretive analysis decreased our sample to 609 articles. In the second step, we read the remaining 609 articles' introduction, discussion and conclusion sections to assess whether the respective article provided deeper insights into ethical decision-making processes and situations. We discarded 482 of the 609 articles, as these did not provide actual discussions or settings of some form of decision-making with the help of AI or Big Data.
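The combinatorial keyword search described above can be sketched as follows. This is a minimal illustration of the search-string logic only, not the authors' actual Scopus script; the `TITLE-ABS` field wrapper and the exact quoting are assumptions about the query syntax.

```python
from itertools import product

# Primary terms, with "artificial intelligence" replaced in subsequent runs
subjects = [
    '"artificial intelligence"', '"AI"', '"decision-making"',
    '"big data"', '"robotic process automation" OR "robot"',
    '"smart machines"', '"automation"',
]

# Qualifier groups combined with each subject term via AND
qualifiers = [
    '(critics OR critique)',
    '(challenges OR implications)',
    'future',
    '(ethics OR moral OR justice)',
]

# One title/abstract query per subject-qualifier pair
queries = [f"TITLE-ABS({s} AND {q})" for s, q in product(subjects, qualifiers)]

print(len(queries))  # 7 subject terms x 4 qualifier groups = 28 query runs
```

Running each generated string against the 148 pre-selected journals in Scopus would reproduce the broad search stage; deduplication across the 28 overlapping result sets then yields the pool screened in the next steps.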
At the same time, we also added 11 articles outside of our original sampling frame that were heavily cited in the 609 selected articles and seemed particularly relevant, because they provided further and deeper insights, regardless of their ABS/AJG journal classification. This led to the inclusion, for example, of four articles from the International Journal of Accounting Information Systems, a 2-rated ABS/AJG journal.
Overall, this two-step reduction (and expansion) process, 2,472 (discarding 1,863) → 609 (discarding 482, adding 11) → 138, resulted in our final sample of 138 articles from 43 journals. It already became clear at this stage that some journals tend to attract certain topics, with roughly 20% (8) of these 43 journals contributing the majority (76) of the articles. Please see Tables 1 and 2 for a list of all journals.
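The selection funnel and the journal concentration reported above can be tallied as a quick arithmetic cross-check; all figures are taken directly from the text.

```python
# Screening funnel as reported in the review protocol
initial_hits = 2_969          # raw Scopus hits, including duplicates
after_dedup = 2_472           # after removing duplicate articles
after_abstract_screen = 609   # after interpretive reading of abstracts
discarded_full_text = 482     # lacked substantive AI/Big Data decision-making content
added_snowball = 11           # heavily cited articles added regardless of AJG rank

final_sample = after_abstract_screen - discarded_full_text + added_snowball
print(final_sample)  # prints 138: the final sample, drawn from 43 journals

# Concentration: 8 of the 43 journals contributed 76 of the 138 articles
print(round(8 / 43 * 100))    # prints 19, i.e. roughly 20% of the journals
print(round(76 / 138 * 100))  # prints 55, i.e. over half of the sample
```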
Many higher-ranked dedicated accounting journals have not embraced AI or related Big Data, whereas highly ranked management journals, for example the Journal of Business Ethics and also some of the Academy of Management journals are already quite attentive to this topic and context.

Data evaluation
We followed Denzin (2018) in our interpretivist approach to a thematic analysis. This approach identifies data patterns as stories or "meaningful units", given that language constitutes social meaning. In other words, rather than comparing individual reports, our analysis focused on identifying similarities, dissimilarities and the resulting patterns across the narrations of the situations in the articles. The intention was to analyse the depicted situations and processes in detail in terms of their settings and the relevance of ethical decision-making in these. From the 138 articles, 1,671 meaningful units (as detailed and exemplified below) were extracted for further analysis using the ATLAS.ti qualitative coding software. These meaningful units typically comprise one or a few connected sentences that deal with a certain situation or process and are clearly connected (see Table 3 for examples).
It is noteworthy that at this stage the researchers' judgements and previous experience may strongly influence such research. Completely preventing this may be futile. Thus, we used several measures to enhance the qualitative validity of this study and included various checks and balances (Parker and Northcott, 2016), such as protocolled inter-coder reliability measures. Consequently, all five authors and two research assistants read and coded the meaningful units. For this, memos were written by the individual researchers, based on emerging questions about potential patterns and codings (Parker and Northcott, 2016). Any disputed topics were brought up and discussed until all researchers had reached a coding convergence. This reading and coding by such a large number of researchers was necessary to enhance inter-coder reliability.
This data analysis method thus involved the joint interpretation of the various expressions and manifestations of AI in decision-making situations by the five plus two coders. The 1,671 meaningful units were coded inductively and recursively (i.e. a newly emerged code found in later stages might be applied to earlier text fragments when re-reading), resulting in 238 first-order codes. These codes were then aggregated into 50 more comprehensive and abstract second-order codes, as we gradually developed a more holistic understanding of the essence of the first-order codes. Finally, we further condensed and aggregated these second-order codes, based on their essence, into the five challenges of objectivity, privacy, transparency, accountability and trustworthiness as inductive top-level themes (Denzin, 2018).

To exemplify this process, please see Table 3, with examples of meaningful units and their coding.
As a final step, the five plus two researchers conducted an intense two-day workshop, along with two additional outside academics from sociology and accounting, referred to as "advocati diaboli", to identify potential flaws in our thinking; in it, we critically discussed the five identified challenges and clarified their scope. In this workshop, the seven researchers went through one challenge after the other and looked at several archetypical situations in which these challenges were depicted in the selected papers (see Table 4 for an indication of authors and articles). Using these situations and the challenges identified in them, we then debated what an ethical decision-making process would look like in each, and how, when and why the identified challenges would inhibit it. For this, we mapped the challenges to our chosen framework of the four components of Rest's process model to provide a solid theoretical anchoring for our debates and bring in a well-established structure of the processes and antecedents of ethical decision-making. Thus, while the challenges evolved inductively through the interpretation and aggregation of the coded meaningful units (second-order codes to themes), they were then connected to the individual components of ethical decision-making and discussed given different scenarios of a human-machine collaboration from the data.
In the following section we present our findings on the five emerged themes in detail, with examples from the data, and discuss their consequences for the components in Rest's process model.

Results: five challenges to AI-based ethical decision-making in accounting

Objectivity
Objectivity and related bias problems were a salient and recurring topic in our findings when it comes to decision-making. For example, Sun (2019) writes about the application of deep learning in audit procedures for information identification and its challenges based on barely traceable bias and overly complex data structures. In addition, Arnaboldi et al. (2017a) explore the alteration of information through social media and Big Data and the consequently biased decision-making processes. Leicht-Deobald et al.'s (2019) study also explains the use of AI in evaluating people's job and loan applications and finds ample evidence of discrimination. Nevertheless, the literature also provides examples of how AI has helped overcome accounting and auditing bias. For example, Sánchez-Medina et al. (2017) study the impact of a change in norms on the going-concern status (for the better) based on auditors' use of AI.
Looking deeper at the contexts in the articles, the algorithms underlying AI and Big Data were identified as the contributing factors to most ethical challenges of AI-based decisions. These algorithms, for example, were seen to process people's loans, warn of potential credit losses and identify payment patterns (Buhmann et al., 2019; Kellogg et al., 2020). However, these algorithms are the output of human work, and the supplied data stems from the past and is often selected by humans, hence bearing the potential for bias. Consequently, rather than asking whether AI can be objective, the questions could be as follows: How can humans make objective algorithms? Is the data they feed the algorithms free of inherent bias? Training an AI system to ignore race, gender and sexual orientation and make its loan decisions based on other information is possible. However, such a system can only be created with the help and moral awareness of the human experts who create and train AI systems.
This particular challenge of objectivity thus mainly impacts the second and third components in Rest's model, moral judgement and moral motivation, as both will be flawed given biased information or algorithms. It needs to be dealt with accordingly, for example through clear guidelines and awareness building for developers and employees. On the other hand, AI can also provide opportunities to overcome human bias, as Daugherty et al. (2019), for example, state: "What if software programs were able to account for the inequities that have limited the access of minorities to mortgages and other loans? In other words, what if our systems were taught to ignore data about race, gender, sexual orientation, and other characteristics that are not relevant to the decisions at hand?" (p. 60).

Privacy (and data protection)
Privacy and related data protection problems were found to be another key challenge associated with adopting AI-based decision-making in an accounting setting (Martin, 2019a; West, 2019; Wright and Xie, 2017). Privacy is one of the most salient drivers of ethical concerns due to the rapid and largely unregulated increase in Big Data for use in AI-based systems (Arthur and Owen, 2019; Shilton and Greene, 2017). As AI evolves and autonomously chooses the sources of its data, its use of personal information is achieving a new level of power and speed that may be neither easily comprehensible to users nor transparent. Conceptualising this, Martin (2019a), for example, states: "... the results here suggest consumers retain strong privacy expectations even after disclosing information. Privacy violations are valued akin to security violations in creating distrust in firms and in consumer (un)willingness to engage with firms" (p. 65). Strongly related to this, Wright and Xie (2017) focus on the importance of expectation management and state: "Companies can effectively set, and re-affirm, privacy expectations via consent procedures preceding and succeeding data dissemination notifications" (p. 123).
Research on privacy in the context of algorithmic AI is limited, as researchers have mostly focused on data privacy and violations in general (West, 2019). Big Data has been at the foundation of this research (Alles, 2015; Warren et al., 2015) due to its increasing introduction into the specific context of accounting and auditing (Baker and Andrew, 2019). For example, Gepp et al. (2018) write about Big Data techniques in auditing research and practice and explore current trends and future opportunities.
When it comes to privacy and data protection, blockchains are often seen as an accounting and auditing innovation, given their storage of data in a secure, distributed ledger (Cai, 2021; McCallig et al., 2019). Blockchains provide tamper-proof, encrypted storage of data that also allows traceability of who has entered and changed the data. Such traceability is important for auditing, creates transparency for the stored data and is also important for building trust, as will be discussed later. Moll and Yigitbasioglu (2019) see that access to distributed ledgers in the blockchain and to Big Data with algorithmic AI will automate decision-making to a large extent. These technologies may significantly improve financial visibility and allow more timely intervention due to the perpetual nature of accounting. However, counter to data protection, the impact of leaked documents in preventing fraudulent activities (for example via WikiLeaks) should not be underestimated, as Andrew and Baker (2020) examine in the context of US oil interests in Nigeria. Moreover, West (2019) proposes the term "data capitalism" and examines how surveillance and privacy logics are currently being redefined. He states: "Data capitalism is a system in which the commoditization of our data enables an asymmetric redistribution of power that is weighted toward the actors who have access and the capability to make sense of information" (p. 20). The challenge of privacy thus interferes with the third component in Rest's model, moral motivation, as it includes deliberate and non-deliberate violations.
Widespread criticism of both under- and over-regulation of data protection (Huerta and Jensen, 2017), specifically regarding the European Union's (EU) General Data Protection Regulation (GDPR), can be seen as a good indicator of politicians' and experts' difficulty in foreseeing the development of digital, data-driven, AI-based business models, for example those used in FinTechs. So far, there is no similar regulation on data protection in the USA, aside from some coverage within the California Consumer Privacy Act 2020 and the proposed Algorithmic Accountability Act. In Australia, however, the Privacy Act 1988 is comparable to the GDPR. Article 22 of the GDPR further grants the right to human intervention when it comes to decisions. In other words, individuals have a right to ask a human to review the AI's decision-making to determine whether or not the system has made a mistake. This places a legal obligation on the business to be able to make such a judgement, which requires the explainability of AI-based decisions. Such judgements would need traceability of the factors that influenced the decision and also transparency concerning the inner workings of the algorithms behind the decisions. The broad scope and impact of this demanded transparency is discussed next.

Transparency
(In)transparency as a challenge to AI-based decision-making often became only indirectly evident in the scenarios. One of the reasons for this may be the severe underspecification of the nature and scope of transparency. In these cases, transparency was often described only as an important boundary condition for other concepts such as trust or accountability. Glikson and Woolley (2020), for example (referring back to Pieters (2011)), explore the role of transparency in facilitating trust and confidence in AI.
In addition to data transparency concerning data collection, creation, manipulation and use (Albu and Flyverbom, 2016), the literature allowed us to identify another major problem: algorithms, and consequently the resulting decisions, are often neither transparent nor explainable (Buhmann et al., 2019; Martin, 2019b). Neural networks as the backbones of AI (Ristolainen, 2017) are often identified as black boxes based on proprietary code and structures (sometimes even implemented in discrete, opaque hardware devices), which technology companies are unwilling to share with the public. These artificial neural networks comprise biologically inspired algorithms, modelled loosely on the human brain, for deep, reinforcement-based learning. Reinforcement learning means that the AI learns from outcomes in comparison to its predictions (Sun, 2019; Sun and Vasarhelyi, 2018). Thus, a neural-network-based, deep-learning AI constantly adapts and changes its behaviour based on environmental responses. However, such environmental influences are highly complex and partially random. Under these conditions, AI's behaviour can be seen as neither deterministic nor transparent (see, for example, Glikson and Woolley (2020)). AI's lack of transparency also makes it difficult to uncover any potential biases (see the objectivity challenge above), which may come either from the algorithmic code or from the data with which an algorithm has been trained (or which it has accidentally or deliberately learnt later on). Therefore, constant monitoring and traceability are needed to determine the source of any identified bias.
Technology firms have become increasingly conscious of this topic after several scandals. Consequently, these firms, for example Google, have recently published videos and other material to raise ethical awareness regarding the lack of transparency in algorithms (Leicht-Deobald et al., 2019). Moreover, auditing standards now often demand that the auditor be held liable for an audit failure based on accounting and auditing information system decisions. This situation additionally emphasises the need for transparent and explainable AI decision-making that provides traceability and auditability of its algorithms (Munoko et al., 2020). That said, even if such traceability were technically achievable, if it cannot be explained in simple terms or understood by most professionals (including auditors), it would still be of limited use. Consequently, knowledge of the underlying concepts of AI algorithms, their use scenarios and their limitations is required to ensure explainability and thus transparency of algorithms. Yet, concerning the algorithms, even software developers struggle because of highly complex code that has accumulated over time and across different teams.
From another, contrarian perspective, complete transparency in certain situations may be neither possible nor desirable (Martin, 2019b; Robertson et al., 2016), as it may violate privacy (see the literature in the previous section), subjectivise employees (see for example Mehrpouya and Salles-Djelic (2019)) or reveal trade secrets. Thus, the demand for, and level of, transparency differs between cases (Kemper and Kolkman, 2019). For example, the transparency needed for corporate social responsibility (Cortese and Andrew, 2020) differs from that needed for an algorithm that decides where to place an advertisement (Albu and Flyverbom, 2016) or whom to hire (Glikson and Woolley, 2020). From an organisational perspective, Albu and Flyverbom point out: "In most accounts, transparency is associated with the sharing of information and the perceived quality of the information shared. This narrow focus on information and quality, however, overlooks the dynamics of organizational transparency" (p. 268). Finally, it is important to be aware that if all the processes involved in an algorithm's decisions are made transparent, people could begin to manipulate the (self-learning) algorithms based on that understanding (Arnaboldi et al., 2017b; Leicht-Deobald et al., 2019), and particularly influence the data fed to the algorithm in order to obtain "favourable" results.
Transparency as a challenge thus interferes with many components of Rest's model: it can be seen as a prerequisite for moral awareness, and its absence precludes others from assessing various decision outcomes, as would be necessary for moral motivation. It can also be seen as an important precursor of accountability and trustworthiness, as discussed in the next section, thus also impacting moral character (behaviour).

Accountability
Accountability has been well explored in the accounting and auditing literature (Abhayawansa et al., 2021; Ahn and Wickramasinghe, 2021; Cooley, 2020). Bebbington et al. (2019), for example, examine accounting and accountability in the Anthropocene, and Dalla Via et al. (2019) scrutinise how different types of accountability (process or outcome) influence information search processes and, subsequently, decision-making quality.

AI based decision making in accounting
Furthermore, Brown et al. (2015) discuss how accounting and accountability can promote a pluralistic democracy that acknowledges power differentials and beliefs. These thoughts can be applied to the context of humans and AI collaborating and making decisions together. However, few studies actually examine accountability in the context of AI-based accounting systems, apart from the early but comprehensive insights of Munoko et al. (2020) in the context of auditing.
When software developers (and computer scientists) design an algorithm, they also design the delegation of accountability within the decision-making process (Buhmann et al., 2019; Martin, 2019b; Martin et al., 2019). Algorithms are sometimes designed to disassociate individuals from their responsibility by precluding users from taking an active role within the decision-making process. Such inscrutable algorithms act more autonomously and leave less room for human intervention. Inscrutable algorithms (Buhmann et al., 2019), designed to be difficult to understand, may place greater accountability on their designers. Furthermore, if an algorithm is extremely complicated and difficult to understand, then the AI provider, rather than management and auditors, should be held responsible (Kellogg et al., 2020; Martin, 2019b, 2020; Munoko et al., 2020).
The argument that algorithms and Big Data are complicated to explain and often poorly understood relieves neither an organisation nor an individual from accountability, nor from making proper use of the data (Arnaboldi et al., 2017b). Otherwise, companies would have an incentive to create complex systems that help them avoid accountability (Martin, 2019b; Martin et al., 2019). From the perspective of the individual, Arnaboldi et al. (2017b) further state "that accountants timidly observe big data at a distance without taking the lead as expected by accounting associations" (p. 765), and Appelbaum et al. (2020) propose a framework for auditor data literacy and demand that "In this data-centric business environment, acquiring the knowledge and skills of data analysis should be a current professional priority" (p. 5).
What is more, Ananny and Crawford (2018) observe that even algorithm designers often cannot explain how a complex system works in practice or which parts of the algorithm are vital for its operation. They also add that the more an individual knows about a system's inner processes, the more he or she should be held accountable, similar to remarks by Arthur and Owen (2019). Ananny and Crawford (2018) further suggest that people must hold systems accountable by examining them, rather than privileging a type of accountability that needs to look inside systems. Cross-examining human-machine systems allows one to see them as sociomaterial phenomena (Orlikowski and Scott, 2008) that do not contain complexity but enact complexity by connecting to and intertwining with human and non-human assemblages (Lewis et al., 2019; Miller and Power, 2013).
Understanding that humans' responsibility is not limited to the use of AI algorithms can be seen as the first step towards promoting ethical AI-based systems. Numerous human influences are embedded in algorithms, including auditors' criteria choices, the selection of training data, semantics and, increasingly, visual interpretation. Therefore, ethical algorithmic accountability must consider algorithms as objects of human creation, interaction and moral intent, including the intent of any group or institutional processes that may influence an algorithm's design or data feed. Lastly, human actors' agency (including power differentials) must also be considered when they interpret algorithmic outputs in the course of making higher-level decisions. This also means focusing on the coordination of responsibilities between accountants/auditors and specialists (Griffith, 2020), which needs to be strongly embedded in the "good governance" of such technologies. For example, Brennan et al. (2019) find that challenges to good governance such as "the accountability towards data ownership, having a voice in questioning data integrity or privacy around performance evaluations and assurance of such data become critical" (p. 10).
While in traditional organisational settings human agency is well connected to accountability, the connection seems far less clear-cut in the context of AI-based decision-making.

AAAJ 35,9
Thus, the challenge of accountability impacts the first, third and fourth components of Rest's model. First, moral awareness needs to be implemented by humans; for any ethical decision-making, we therefore first have to ensure accountability of the developers of algorithms and the providers of data. Second, as personal interests are generally influenced by the level of accountability, accountability influences any moral motivation in human-machine settings. Third, AI decision-making rests on three factors: the human-made algorithms applied within the AI; the partly human-supplied, partly AI-selected data that serve as its basis; and the delegation and distribution of agency between humans and AI as decided by humans. Any normative call for moral behaviour will therefore need to understand accountability as rooted in the complex interplay between the various actors involved and to see AI decision-making as embedded in a sociodynamic, sociomaterial system (Lawrence and Phillips, 2019; Orlikowski and Scott, 2008).

Trustworthiness
Trust is broadly defined as an individual's willingness to be vulnerable to another person (Martin, 2018). Trust is also strongly related to control, in other words to the "mechanisms used by individuals and organizations to specify, measure, monitor, and evaluate others' work in ways that direct them toward the achievement of desired objectives" (Long and Sitkin, 2018, p. 725).
In accounting and auditing, trust has been studied at three levels: an individual's general trust disposition, trust in a specific firm and institutional trust in a market or community (Adelopo and Rufai, 2018; Chaidali and Jones, 2017; Glikson and Woolley, 2020; Mahama and Chua, 2016; Whelan, 2018). The concept and design of a technology, the surrounding communication and the context of the firms that employ it can influence users' perception of its trustworthiness. Certain designs may inspire consumers to overly trust a particular technology in their interaction with the system, often through the lure of gamification (Thorpe and Roper, 2017). Martin et al. (2019) state that this scenario can be considered the fourth level of trust. Trustworthiness (Cui and Jiao, 2019) in AI is not only about what a system or decision-making process states it will do (integrity, ability) but also about having confidence that, even if the system's process cannot be understood, it will still be carried out in a manner that supports human interests (benevolence) (Mayer et al., 1995). What is more, as Glikson and Woolley (2020) point out: "Users are not always aware of the actual technological sophistication of AI; while in some cases highly intelligent machines are acting in their full capacity, in others the capability may not be fully manifest in their behaviour" (p. 628). The literature provides a variety of views on trust challenges related to AI, mostly concerning the biases stemming from algorithms. As discussed, these biases can stem from issues related to responsibility, unethical use of shared data, transparency problems and the lack of accountability (Glikson and Woolley, 2020). According to a study of US consumers, people generally tend not to trust AI's decisions (Davenport and Kokina, 2017). The reason is that most people are not aware of how advanced algorithms work or how they come to conclusions. This brings with it the further notion of a propensity to trust (Alarcon et al., 2018), which has not yet been addressed in research on AI: in other words, whether an individual's ability to trust would change between human and machine actors as counterparts and recipients in the context of accounting.
In addition, trustworthiness and the corresponding trust were seen to be highly relevant to human-AI relationships because of the perceived risk embedded in these relations and the complexity and non-determinism of AI behaviour (Etzioni, 2017; Jeacle and Carter, 2011). Although an algorithm is initially designed by humans, AI systems that learn on their own are not explicitly taught under any moral guidance. Accounting professionals using AI often have no choice but to trust these systems. Normally, a basic element of trust between humans is the physical appearance of the trustee. However, given that AI is intangible, AI embodiment plays an important role in trust development between humans and AI (Glikson and Woolley, 2020). Successfully integrating AI systems into the workplace critically depends on how much employees trust AI (Jarrahi, 2018), and a humanisation of the technological actors seems to help. Hence, AI-based robots are given human names (e.g. Roberta) and communicate in ways that are familiar to office workers (Leitner-Hanetseder et al., 2021).
Trust, and more specifically trustworthiness, can in this case be seen as a catalyst for any agency and meaningful engagement and thus as a necessary prerequisite for moral behaviour. If humans do not trust the decision-making processes of AI running in the background of accounting, these decisions will not be taken up (lack of institutional trust). What is more, even if AI is trusted enough to come to the right conclusions and make the right decisions, distrust by humans based on rational factors (e.g. a lack of ethical guidelines for AI decisions) or irrational ones (e.g. a refusal to take orders from machines) might compromise the execution of such decisions (lack of organisational trust). Thus, besides acting as a catalyst for moral behaviour in Rest's model, the different forms of trust can also be seen as strong moderators of all the other ethical challenges and thus indirectly concern all four components of our ethical decision-making model.
Summing up, not all of the identified ethical challenges influence Rest's four components for ethical decision-making equally, as summarised and illustrated in Figure 1. While trustworthiness can be seen as a catalyst and a prerequisite for overcoming any of the other four potential challenges, the other challenges typically, though not exclusively, influence only one or two components. Another interesting case was the challenge of transparency, which was seen to moderate the impact of an objectivity-related challenge on the moral judgement and moral motivation components and also constitutes an antecedent of accountability. One word of caution: while we were able to identify potential impacts of the challenges on an ethical decision-making process, the strength of these impacts on the individual components of Rest's model may be moderated by the level of human-machine collaboration and the related distribution of tasks and agency (Jarrahi, 2018; Munoko et al., 2020).
Discussion: contributions, implications and outlook
In the previous section we examined the identified dominant ethical challenges and their impact on the different components of Rest's ethical decision-making process. In an ideal setup of human-machine collaboration, the human brain could ideate and make the final decisions, whereas AI would combine and analyse raw data and present the resulting information, automatically tailored for different purposes (Raisch and Krakowski, 2021). What is more, the detailed examination of the individual components of Rest's model also demonstrates the necessity for future accounting leaders to understand how to make competent and situational use of AI (Brougham and Haar, 2017; Leitner-Hanetseder et al., 2021) and where the limits of AI might lie (Losbichler and Lehner, 2021). Organisations would have to ensure a humanistic human-machine relationship by carefully guiding and governing the related processes.
One takeaway from the combined insights of this research might be the necessity to create (or broaden the scope of) an intra-firm governance committee to oversee and (internally) audit AI-based processes and the related Big Data. This committee could critically examine algorithmic development, AI learning through the presented data as well as the training of the respective users, and subsequently review the decisions made in such human-machine symbioses. Such an AI-governance committee could also develop ethical guidelines for a future of more autonomous AI and identify the potential damage of AI-based algorithms a priori in order to come up with specific regulations. Future research on this would need to combine humanistic, legal/governance, accounting/auditing and information sciences perspectives to tackle questions such as the nature of fairness in AI, the good (model) governance of Big Data or best practices concerning the development, training and use of AI-based accounting systems (Andreou et al., 2021; Brennan et al., 2019; Cullen and Brennan, 2017). Such endeavours would also connect well to ongoing research and practice on corporate sustainability accounting and reporting (Grisard et al., 2020; Mitnick et al., 2021) concerning environmental, social and governance (ESG) factors. After all, Big Data and AI will have a strong influence on the sustainability of a firm and may even be instrumental in the assurance of sustainability reports (Boiral et al., 2019; Silvola and Vinnari, 2021). Consequently, we expect the good (model) governance of AI and Big Data to become part of future assurance practices (similar to auditing risk models) and to influence at least the G score of the ESG factors.
This article further contributes to the literature on accounting as a subjectivising and, at the same time, mediating practice in a socio-material context (Miller and Power, 2013). It does so by providing a solid base of arguments that, on the one hand, an AI-based accounting system as a hybrid, networked actor with evaluative power over others cannot make ethical decisions on its own because it lacks the necessary preconditions in terms of Rest's model of components as antecedents. On the other hand, we also find that AI provides very strong support to other actors and enhances overall systemic decision-making by linking often widely dispersed actors and further data-rich arenas that were previously inaccessible because of cognitive limitations. What is more, as AI is bound to pre-set goals and still subjected to human-made conditions despite its autonomous learning and adaptive practices, it will always lack true autonomous agency even if such agency were formally bestowed (Murray et al., 2021; Tasselli and Kilduff, 2021; ter Bogt and Scapens, 2019).
An ethical AI-based decision-making process needs to start in the development phases of the underlying algorithms, demanding developers' moral awareness in the design phase to allow for later explainability and auditability. In other words, if the first and vital component of moral awareness is not enacted during an algorithm's design process, then all following process steps may fail. In the context of Weberian notions of formal and substantive rationality, Lindebaum et al. (2020) thus recognise algorithms as supercarriers of formal rationality (ter Bogt and Scapens, 2019). Algorithmic decision-making enforcing a formal rationality may imply the end of human (and thus moral) choices, not only through the suppression of substantive rationality but also through the transformation of substantive rationality into formal rationality via formalisation. In other words, the aim of achieving ethical AI poses challenges predominantly to its specific formalisation (Eccles, 2016; Lydenberg, 2013): in order to "teach" AI-based algorithms human morality, this morality must first be conceptualised (formalised) in a manner that can be learnt, and thus processed, by an algorithm (Lindebaum et al., 2016; Lindebaum et al., 2020).
Following our conversation in the previous paragraphs on the accountability of algorithms, it becomes clear that allocating decision-making power solely to AI will result in unethical decisions (Kovacova et al., 2019; Leicht-Deobald et al., 2019; Lindebaum et al., 2020; Zollo et al., 2016) and that the way forward may be a human-machine symbiosis with careful checks and balances in place. Additional research on the nature of this transformation in accounting is urgently needed (Munoko et al., 2020), particularly research bringing in critical voices and perhaps also a turn towards normative thinking about how we want to shape our future in the identified human-machine symbiosis. A further, related discussion on the societal values that should guide AI implementation and decision-making in accounting seems necessary. Does a short-term shareholder-value goal setting even provide the "right" guidance for AI systems in their decision-making? Current "human" managerial mitigation and a subjective stakeholder orientation based on moral and zeitgeist awareness may be completely missing in such decision-making. In other words, would the rationalism of AI, which strictly follows the learnt rules of the game, not inevitably lead to an unwanted dystopia based on the inherent, yet partly veiled and mitigated, value schemes of our society? What is clear, though, is that from a sociomaterial perspective (Orlikowski, 2016), AI as an accounting apparatus with its numerous embedded instruments of valuation will inevitably shape both values and valuers (Kornberger et al., 2017; Salijeni et al., 2021).
An interesting perspective for future theoretical research in this area can also be found in the calls for a more dialogic accounting (DA), following for example Manetti et al. (2021). True to the (necessarily) interdisciplinary and critical nature of research into ethical decision-making in future scenarios of human-machine collaboration, it seems prudent to rethink the nature of accounting information as a whole. Furthermore, the societal implications of AI-based decision-making, together with the multi-modal, technology-driven turn in (sustainability) reporting (Busco and Quattrone, 2018; Larrinaga and Bebbington, 2021; Quattrone et al., 2021), might benefit from the inclusion of, and dialogue with, stakeholders in specific decision-making processes.
Besides the theoretical, there are very clear practical implications of our findings and debate. As AI becomes stronger, additional guidelines and organisational structures need to be developed to maintain control of it while profiting from its strengths and versatility. Humans must always continue to exercise control over the execution of AI-based decisions to ensure moral behaviour and must continuously examine the outcomes and arising ethical implications of AI decision-making. Simply implementing "textbook" boundaries in an AI-based accounting system will lead to a dystopia because of the inevitably rational and morally devoid, albeit highly efficient, execution by the machines. Accounting and auditing scholars interested in the larger societal implications of auditing as a practice and institution must, however, consider such a possibility to stay motivated in further exploring the ethical dimensions of AI-based technologies in our field. From a functionalist perspective, Asimov's three laws of robotics inevitably fall short when it comes to decision-making in a formalised accounting system that does not regard humans as more than consumers or a labour force.

Conclusion
We set out to provide a comprehensive, substantiated and critical conversation on the potential ethical challenges of AI-based decision-making in the larger area of accounting, to assist researchers in driving the agenda forward and to allow policymakers and managers to make informed decisions concerning organisational challenges and necessary adaptations.
Looking deeply into the identified potential challenges, and their potential impact on the antecedents of ethical decision-making (based on Rest) in a human-machine collaboration, we have identified some key areas to focus on. The most salient are the importance of transparent and auditable algorithmic designs, the importance of achieving trustworthiness, and the inevitably shared accountability between humans and AI because of their shared agency.
AI is rapidly changing our profession and its organisational and societal relevance. While scholars and practitioners agree on the significance of ethical perspectives in our understanding of this change, and regulators discreetly stipulate human accountability even in complex AI scenarios, many of the connected debates remain on the surface. With this article we wanted to raise awareness of the necessity to look deeper into the specifications, processes and antecedents of ethical decision-making in order to address the arising challenges at a granular level.
From a normative perspective, after working with the material for more than two years, the five authors are unanimous in their opinion that the only humanist way forward is to aim for and create a scenario of human-AI collaboration in accounting that still allows humans and societal values to guide certain decisions. In this scenario, the power and agency of humans and AI need to be carefully balanced; otherwise, ethical decision-making cannot be assured in the future.

Figure 1. Findings summarised: ethical challenges and their potential relations to Rest's model

Table 4. The article overall (based on all 13 meaningful units in this article) was consequently used as evidence for the challenges of Transparency and Objectivity.