Limits of artificial intelligence in controlling and the ways forward: a call for future accounting research

Purpose – Looking at the limits of artificial intelligence (AI) in controlling based on complexity and system-theoretical deliberations, the authors aim to derive a future outlook on the possible applications and provide insights into a future complementarity of human and machine information processing. Derived from these examples, the authors propose a research agenda in five areas to further the field.
Design/methodology/approach – This article is conceptual in nature, yet a theoretically informed semi-systematic literature review from various disciplines, together with empirically validated future research questions, provides the background of the overall narration.
Findings – AI is found to be severely limited in its application to controlling and is discussed from the perspectives of complexity and cybernetics. A total of three such limits, namely the Bremermann limit, the problems with a partial detectability and controllability of complex systems and the inherent biases in the complementarity of human and machine information processing, are presented as salient and representative examples. The authors then go on to carefully illustrate what a human–machine collaboration could look like depending on the specifics of the task and the environment. With this, the authors propose different angles on future research that could revolutionise the application of AI in accounting leadership.
Research limitations/implications – Future research on the value promises of AI in controlling needs to take into account physical and computational effects and may embrace a complexity lens.
Practical implications – AI may have severe limits in its application for accounting and controlling because of the vast amount of information in complex systems.
Originality/value – The research agenda consists of five areas that are derived from the previous discussion. These areas are as follows: organisational transformation, human–machine collaboration, regulation, technological innovation and ethical considerations. For each of these areas, the research questions, potential theoretical underpinnings as well as methodological considerations are provided.

automation of demanding analytical activities (such as machine forecasts and artificial intelligence [AI]). While the automation of routine activities, particularly in large companies, is progressing successfully, the support of analytical activities seems to be considerably more difficult. According to a study by the German Federal Ministry of Economics, only 5% of German companies currently use AI in one of their divisions (Feser, 2020). The percentage of companies using AI in controlling is therefore negligible. At the same time, there are great expectations of the AI systems used in controlling (Seufert and Treitz, 2019). This article examines both the limits of the forecasting capabilities and the possible applications of automated forecasts and provides a derived research agenda for our field.
The complaints about an uncertain and difficult-to-plan environment, the premature "being outdated" of planning and the budgetary "power games" have a long history. At the beginning of the 2000s, the Beyond Budgeting Round Table (BBRT) loudly called for an end to classical planning. In the course of the 2008 financial crisis, the term VUCA, which stands for volatility, uncertainty, complexity and ambiguity, became established as a synonym for the problem of the predictability of future developments (Bennett and Lemoine, 2014). In response to the then "new normal", concepts such as modern budgeting, scenario planning, bandwidth planning and rolling forecasts were presented, which in various ways propagated the abandonment of detailed, precise planning and forecasting (Lepori and Montauti, 2020). With the advent of digitisation, however, a paradigm shift seems to have begun. Access to new data sources (big data), almost unlimited computing power and AI systems has quickly led to keywords such as predictive analytics and the first applications of AI-based machine forecasts (Batistič and van der Laken, 2019; Brands and Holtzblatt, 2015; Earley, 2015; Mikalef et al., 2019; Qasim and Kharbat, 2019). This revived the belief in the predictability of the future (see Figure 1), at least until the outbreak of the corona crisis. The few field reports from predominantly large corporations seem to confirm the possibility of predictability through AI and the superiority of machine forecasts.
The differences between human and machine forecasting can be plausibly explained by the complementarity of human and machine information processing (Harris and Wang, 2019; Hofmann and Rothenberg, 2019). However, despite positive examples from experience, a realistic expectation is appropriate with regard to the forecast accuracy of machine planning and forecasting, as there are limits to the ascertainability and planning capability of AI in a VUCA environment (Caglio, 2003; Warner and Wäger, 2019). These limits shall now be discussed from the point of view of complexity and cybernetics in the next few sections, before we move on to illustrate what a human-machine collaboration could look like and what this would mean for future research by providing an empirically validated research agenda.
Limits of predictability from the perspective of complexity and cybernetics
Dealing with complexity is considered one of the greatest challenges in management today (Falschlunger et al., 2016; Reeves et al., 2020). Managers have to take into account an ever-increasing number of factors in corporate management, which are also changing ever more rapidly and are highly interlinked. The main drivers of this development are globalisation and, paradoxically, despite its salvatory potential, the rapid progress of digitisation, which networks the world in real time and increases the speed of change. Cybernetics, in particular, has taken on the task of dealing with complexity. Pioneers such as Ashby, Beer, Forrester, Luhmann, Ulrich, Probst, Gomez, Malik, Dörner and Vester created elementary foundations for this a long time ago (Luhman and Boje, 2001; Oll et al., 2016; Reeves et al., 2020), which are now more topical than ever before with regard to the limits of AI (Dwivedi et al., 2019). As representative examples, the Bremermann limit (Bremermann, 1963; Malik, 1984) and the partial detectability and controllability of complex systems (Luhman and Boje, 2001; Zelinka et al., 2014) are highlighted in this article.

Bremermann's limit
In accordance with Bremermann's limit, human knowledge is subject to an insurmountable, absolute limit, which cannot be removed even by the greatest progress in digitisation. Because of the atomic nature of matter, there is an upper limit to information processing that cannot be exceeded by any computer or brain consisting of matter with a mass M, given the maximum speed of light c: in other words, no system consisting of matter can process more than approximately 2 × 10^47 bits per second per gram (Bremermann, 1962, 1982). By further including general relativity effects, the gravitational constant as well as Planck's constant, an absolute limit of approximately 10^43 bits per second has even been proposed, irrespective of the mass (Gorelik, 2009). As a consequence, even the most powerful cloud-based computer clusters, such as Hadoop installations (Zikopoulos and Eaton, 2011), may never have the necessary computing power for completely accurate forecasts in today's complex competitive environment, and Moore's law of doubling processing power approximately every two years cannot be projected ad infinitum because of the stated physical limits of information processing (Gatherer, 2007). Malik made an interesting comparison in his habilitation thesis (Malik, 2000), in which he determined the theoretical limit of the information processing capacity under the assumption that the entire mass of the earth, since the beginning of the earth's history, had been a gigantic computer permanently processing information. He contrasted this information processing capacity with the complexity of typical decision-making situations in management, showing the limited ability to make predictions.
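To make the orders of magnitude tangible, the following back-of-the-envelope sketch in Python replays Malik's thought experiment under our own rounded assumptions for the Earth's mass and age; the comparison with a decision space of a few hundred binary, interacting variables is likewise our illustrative choice, not a figure from the cited sources.

```python
import math

# Bremermann's limit: maximum information processing of matter
# (Bremermann, 1962), in bits per second per gram.
BREMERMANN_LIMIT = 2e47

EARTH_MASS_G = 5.97e27                      # assumed mass of the Earth in grams
EARTH_AGE_S = 4.5e9 * 365.25 * 24 * 3600    # assumed age of the Earth in seconds

# Upper bound on the bits an Earth-mass computer could have processed
# since the planet formed (Malik's thought experiment).
max_bits = BREMERMANN_LIMIT * EARTH_MASS_G * EARTH_AGE_S
print(f"~10^{math.log10(max_bits):.0f} bits")   # ~10^92 bits

# A brute-force enumeration of a decision situation with n binary,
# interacting variables has 2^n states; already n = 400 exceeds the
# bound above by a wide margin.
for n in (100, 300, 400):
    print(n, f"states ~10^{n * math.log10(2):.0f}")   # ~10^30, ~10^90, ~10^120
```

Even under these maximally generous assumptions, a complete enumeration of a few hundred interacting binary decision variables is physically out of reach, which is precisely Malik's point.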
Partial detectability and controllability of complex systems
Figure 2 shows the structural makeup of complex systems such as our current economic system. They consist of a multitude of elements (Reeves et al., 2020) (a to h) and relationships (arrows between the elements), whereby the system breaks down into a part (a, b, d, e, g or h) visible to the actuator A (manager or controller) and an invisible part (c or f). An example of an invisible element would be the coronavirus before its outbreak. This has a significant consequence: we do not know that certain elements exist and hence cannot take them into account while making decisions. The system is therefore only partially detectable and can only be modelled incompletely in AI systems.
Furthermore, complex systems are divided into active elements (b and d), which change independently, and passive elements (a, c, e, f, g and h). Because of the active elements, complex systems have their own dynamics. They do not wait for the intervention of the actuator but change independently. Both the elements themselves and the relationships between the elements can change without any intervention. Consequently, the input (management interventions) no longer determines the output alone. Rather, the output is dependent on the input and the states of the system. Therefore, the system constantly surprises us with its behaviour. Forrester (1974) described this as counterintuitive because known phenomena suddenly behave differently from what we expect on the basis of experience (Dörner et al., 1983). This also applies to machine forecasts based on AI, which should ultimately be able to accurately predict the future on the basis of past data (states of the system). The intrinsic dynamics of complex systems, taken together with Bremermann's limit, has profound consequences: the ideal of exact prediction becomes impossible. Rather, we must be content with patterns.
Finally, managers in complex systems have only limited control options. To achieve the goals, the actuator must change the state of certain elements. For the actuator, the elements of the system break down into elements that can be influenced directly (dotted lines from the actuator to the elements a, d and g), influenced indirectly (b, e and h) or not influenced at all (c and f). In addition, influencing elements in isolation is difficult because they are highly interconnected, and the actuator is in turn influenced by the elements themselves (dashed lines from the elements a, e and h to the actuator). Limited controllability thus comes on top of the limited possibility of prognosis.
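The following minimal sketch encodes the classification of Figure 2's elements as described above; the dictionary representation and the printed sets are our own illustrative rendering of the figure, not part of the original.

```python
# Minimal sketch of the complex system in Figure 2: elements a-h with the
# three classifications discussed in the text. An AI model built by the
# actuator A can, by construction, only contain the visible elements.

elements = {
    #         visible  active  influence: "direct", "indirect" or None
    "a": dict(visible=True,  active=False, influence="direct"),
    "b": dict(visible=True,  active=True,  influence="indirect"),
    "c": dict(visible=False, active=False, influence=None),
    "d": dict(visible=True,  active=True,  influence="direct"),
    "e": dict(visible=True,  active=False, influence="indirect"),
    "f": dict(visible=False, active=False, influence=None),
    "g": dict(visible=True,  active=False, influence="direct"),
    "h": dict(visible=True,  active=False, influence="indirect"),
}

# What the actuator (and any AI trained on its data) can see at all:
model_scope = {k for k, v in elements.items() if v["visible"]}
# Elements that change on their own, giving the system its own dynamics:
self_dynamic = {k for k, v in elements.items() if v["active"]}
# Elements beyond any control intervention:
uncontrollable = {k for k, v in elements.items() if v["influence"] is None}

print(sorted(model_scope))     # ['a', 'b', 'd', 'e', 'g', 'h'] - partial detectability
print(sorted(self_dynamic))    # ['b', 'd'] - output depends on state, not input alone
print(sorted(uncontrollable))  # ['c', 'f'] - partial controllability
```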
In summary, it can be deduced from these two areas that, from a cybernetic and systems-theory perspective, exact forecasts remain an unattainable ideal even in the age of AI and machine forecasts. This is not to say, however, that machine forecasts cannot bring about improvements in controlling. On the one hand, the same result can be achieved by automation with less effort, and on the other hand, an improvement in quality can be achieved through the complementarity of human and machine information processing.
Complementarity of human and machine information processing
The question of why machine forecasts might be superior to human forecasts can be answered primarily from the perspective of human rationality deficits. The performance limitations of the human brain in information reception and processing can be summarised as follows (see also Haefner, 2000):
(1) People can only use information that they have learned or that is quickly available externally (e.g. on paper). The human brain has weaknesses in retrieving information.
(2) The human problem-solving space is relatively small. Only little information can be processed simultaneously. In short-term memory, no more than five to nine information or sense units, so-called chunks, can be processed simultaneously (Miller, 1994, 2003).
(3) The brain tires and can only solve problems continuously for a limited period. Continuous thinking over a longer period is accompanied by an increasing frequency of errors.
(4) The brain works relatively slowly. The speed, however, depends on the type and familiarity of the problem: the lightning-fast human pattern recognition of whether an apple is fresh or rotten versus the slowness of mental arithmetic.
Besides these capacitive "skill deficits", there are behavioural deficits. For example, people are content to achieve their individual aspirations and do not necessarily strive for the maximum achievable, or they make decisions for personal benefit rather than for the benefit of the company. Cognitive limitations and behavioural patterns have been widely discussed in the literature; the long list of identified "biases" bears witness to this. The following examples show typical human deficits in forecasting (Barberis and Thaler, 2003; de Graaf, 2018; Forbes, 2009):
(1) Overestimating oneself often leads to optimistic forecasts.
(2) People unconsciously align forecasts with an "anchor" or orientation point. In forecasting, for example, this can be the budget or the previous year's values.
(3) The willingness to accept new information increases when the information supports the intention of the decision-maker.
(4) Power-related distortions of information, such as the fear of reputation loss, mean that forecasts are maintained even when the opposite is already apparent.
(5) Discounting: as remote problems seem less significant than immediate ones, negative developments are not immediately communicated.
From the above examples, it is clear that the use of automatic forecasts can increase the quality of forecasts. On the one hand, a larger amount of information can be included in the forecast, and on the other hand, machine forecasts are not subject to the distortions caused by self-interest ("unemotional forecasts"). However, caution is advised. An essential principle of AI is the ability to learn and improve. Optimisation algorithms can determine the accuracy of the model and adapt it to increase future accuracy. Even if AI systems have no self-interest, human biases can be learned unconsciously through the data provided to the system. In addition to the limitations of the human brain, one of its major strengths should be mentioned. The human brain constantly solves problems that have not been posed to it. The brain does not have a static structure; rather, it is constantly reorganised. Thus, problems are spontaneously seen in a new way. This characterises the creativity and innovative ability of the human being and is an essential difference from machines.
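How such biases can slip into a learning system is easy to demonstrate. The following sketch, with entirely synthetic numbers of our own choosing, fits a simple model to historical human forecasts that were anchored to prior-year values; the fitted model faithfully reproduces the anchoring bias it was trained on.

```python
# Illustrative sketch (synthetic data): a model trained on historical human
# forecasts silently inherits their anchoring bias. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 500
true_demand = rng.normal(100, 20, n)     # what actually happened
prior_year = rng.normal(100, 20, n)      # the "anchor"

# Human forecasts: 60% weight on the anchor instead of the true signal.
human_forecast = 0.6 * prior_year + 0.4 * true_demand + rng.normal(0, 2, n)

# An AI trained to reproduce the human forecasts (the only "labels" it has)
# learns the anchor's weight, bias included.
X = np.column_stack([prior_year, true_demand])
coef, *_ = np.linalg.lstsq(X, human_forecast, rcond=None)
print(coef)  # ~[0.6, 0.4]: the anchoring bias now sits inside the model
```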

Human-machine collaboration
In our previous sections, we showed that
(1) AI systems and machine forecasts are not yet widespread and are still in their infancy, but are considered to be of great importance and to hold great potential for the future.
(2) The ideal of accurate forecasts remains unattainable even in the age of AI, but their use can improve human forecasting capabilities and automate or support the creation process.
This raises the question of how best to use machine forecasts. Should they replace or supplement human forecasts? Similar to autonomous driving, different levels of support can be distinguished: assisted intelligence, augmented intelligence and autonomous intelligence (Jarrahi, 2018; Munoko et al., 2020; Shank and DeSanti, 2018). With assisted intelligence, the entire forecast process remains in the hands of the controller. The AI or the automatic forecast works according to the concrete requirements of the controller, and the controller decides on the result of the forecast (see Figure 3). With augmented intelligence, the forecast of the controller and the automatic forecast run in parallel. The differences are analysed, and the controller or manager decides which result is used. SAP AG provides an example of augmented intelligence in the forecast process: if the deviation between the two forecasts exceeds a threshold value, the affected areas must explain why they, and not the system, are right. In the last stage, autonomous intelligence, the automatic forecast replaces the human forecast, and both controllers and managers rely on the AI system (see Figure 4). Therefore, AI-based decision-making in accounting must use AI for the right purposes and processes given the specific context and situation, with each context raising different dominant challenges. Figure 5 illustrates an example in which AI and humans would support each other in different ways in three different scenarios. What they all have in common is that the human brain would innovate and direct, whereas the AI would analyse raw data in various ways depending on the purpose and provide an early interpretation of the findings. This detailed examination of the processes also demonstrates the necessity for future accounting employees to understand how to make competent, situational use of AI (Briggs and Makice, 2012) and how future accounting work with AI would appear (Brougham and Haar, 2017; Lehner et al., 2021).
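To make these support levels concrete, the following sketch encodes the escalation rule of the augmented-intelligence stage; the threshold, function names and forecast figures are our own illustrative assumptions and do not describe SAP's actual implementation.

```python
# Minimal sketch of the three support levels described above. The escalation
# rule mirrors the augmented-intelligence example in the text: if human and
# machine forecasts diverge beyond a threshold, the affected area must
# justify its number. All values and names are illustrative.

def assisted(human_forecast: float) -> float:
    # Level 1: the controller owns the process and the result.
    return human_forecast

def augmented(human_forecast: float, machine_forecast: float,
              threshold: float = 0.05) -> tuple[float, bool]:
    # Level 2: both forecasts run in parallel; large deviations are flagged,
    # and the controller must explain why the system is wrong.
    deviation = abs(human_forecast - machine_forecast) / max(abs(machine_forecast), 1e-9)
    needs_justification = deviation > threshold
    return human_forecast, needs_justification

def autonomous(machine_forecast: float) -> float:
    # Level 3: the machine forecast replaces the human forecast.
    return machine_forecast

forecast, flag = augmented(human_forecast=1_150_000, machine_forecast=1_000_000)
print(flag)  # True: the 15% deviation exceeds the 5% threshold
```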

In an uncertainty scenario where few risk functions are known, swift decisions are necessary, and timely information and the automatic detection of anomalies are key (Brougham and Haar, 2017; Donning et al., 2019). Objectivity and transparency are crucial to this scenario. In a complexity scenario, with an abundance of big data, the data processing would easily exceed human cognitive capabilities, leading to an information overload (Falschlunger et al., 2016; Perkhofer and Lehner, 2019). A different kind of AI support seems appropriate here, analysing the data for unidentified features and correlations (Quattrone, 2016) to guide decision-making (Huttunen et al., 2019), supported by clever visualisations (Falschlunger et al., 2015).
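One plausible minimal form of the anomaly detection mentioned for the uncertainty scenario is a rolling z-score over an account's recent history; the window length, threshold and cost figures below are our own assumptions, not a reference implementation.

```python
# Flag bookings that deviate strongly from the recent history of the same
# account. Window, threshold and data are illustrative assumptions.
import numpy as np

def flag_anomalies(series: np.ndarray, window: int = 12, z: float = 3.0) -> list[int]:
    flagged = []
    for t in range(window, len(series)):
        history = series[t - window:t]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(series[t] - mu) > z * sigma:
            flagged.append(t)  # index of the suspicious observation
    return flagged

monthly_costs = np.array([100, 102, 99, 101, 103, 98, 100, 102,
                          101, 99, 100, 103, 180])  # sudden spike at the end
print(flag_anomalies(monthly_costs))  # [12]
```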

Figure 5. Human and AI contributions in three decision-making scenarios. Source(s): Authors
Uncertainty. Human: make swift, intuitive decisions in the face of the unknown. AI: provide access to real-time information (e.g. anomaly detection).
Complexity. Human: decide where to seek and gather data; choose among options with equal data support. AI: collect, curate, process and analyse data.
Equivocality. Human: negotiate, build consensus and rally support. AI: analyse sentiments and represent diverse interpretations.

The third scenario is referred to by Jarrahi (2018) as an "equivocality" scenario. This may be the most demanding scenario for the human-machine symbiosis, as it entails challenges such as ambiguity and, thus, questions of objectivity, trust and the accountability of those who make decisions. AI can analyse sentiments using text-interpretation algorithms and develop new representations of these unstructured data to support decision-making (Quattrone, 2017).
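As a deliberately naive illustration of such text interpretation, the following sketch scores short management comments against a tiny hand-made lexicon; real systems would use trained language models, and the word lists and example sentences here are entirely our own invention.

```python
# Naive lexicon-based sketch of the sentiment analysis mentioned for the
# equivocality scenario: scoring unstructured text to make diverse
# stakeholder positions comparable. Lexicon and examples are invented.

POSITIVE = {"growth", "exceeds", "strong", "confident", "opportunity"}
NEGATIVE = {"risk", "decline", "shortfall", "concern", "uncertain"}

def sentiment_score(text: str) -> float:
    words = [w.strip(".,").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)   # -1 (negative) to +1 (positive)

comments = [
    "Strong growth, sales exceeds plan.",
    "Supply risk and cost concern remain, outlook uncertain.",
]
print([sentiment_score(c) for c in comments])  # [1.0, -1.0]
```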
Finally, and in addition to the level of support, the level of expectation placed on the AI system must be considered. Similar to the analytics development stages, the expectation may merely be the provision of relevant deviation information as a basis for the actual forecast (descriptive and diagnostic). In most cases, however, companies are not satisfied with this and implement a quantitative forecast (predictive). The highest demands are placed on an AI system that forecasts not only the probable outcome but also the necessary measures to achieve it (prescriptive). From today's perspective, however, this still seems to be a vision of the future.
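As a compact summary, the following sketch maps the four expectation levels named above to the kind of output a controller would receive; the phrasing of the outputs is our own illustration of the standard analytics maturity ladder.

```python
from enum import Enum

# The four expectation levels placed on an AI forecasting system, as
# described above; the output descriptions are illustrative.
class AILevel(Enum):
    DESCRIPTIVE = "What happened? (deviation information)"
    DIAGNOSTIC = "Why did it happen? (driver analysis)"
    PREDICTIVE = "What will happen? (quantitative forecast)"
    PRESCRIPTIVE = "What should be done? (recommended measures)"

for level in AILevel:
    print(f"{level.name}: {level.value}")
```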

Discussing a research agenda in five areas
Summing up our deliberations on AI and controlling, we invite authors to follow up on our call for future research and connect their research to the ongoing discourse on the digitalisation of accounting in the Journal of Applied Accounting Research. The outcome of our collective research should also inform society about the broader opportunities and threats stemming from AI-based controlling and help people form an educated opinion on the implied societal changes with all of the corresponding ethical challenges.
At this point, we would like to acknowledge the fantastic support of our colleagues in drafting this research agenda based on their earlier works in Lehner et al. (2019). In a focus group moderated by a co-researcher, the authors together with the above-mentioned experts in this field discussed the theoretical conceptions in the earlier sections of this article, first inductively derived five research areas from them and subsequently compiled a list of the most pressing research questions for each. The resulting list was then presented and discussed at a large finance and accounting conference, where participants (N = 65) voted on the relevancy of those questions via the software Mentimeter (on a scale of 1 to 5, with 5 being the highest). The questions with a relevancy above 3 are now presented, clustered by their research areas.

Research area 1: organisational transformation
Many scholars would agree that any change of such gravity in accounting most likely goes together with a substantial organisational and societal transformation (Troshani et al., 2019). Depending on the chosen theoretical framework, however, causations can be assumed in either or even neither direction between these two levels. Thus, the interplay between the nucleus of accounting transformation and the immediate organisational context as well as the larger societal context will be one of the important issues from an organisational science perspective.
Insights from empirical studies framed, for example, in a neoinstitutional theoretical setting, which accepts the separation of human actors and structure (such as the norms and traditions of the accounting profession) and takes a certain drive for standardisation and isomorphic adaption for granted, will certainly provide valuable starting points. Moreover, Giddens' structuration theory (Englund and Gerdin, 2014), with its notion of transcending the structure-agent separation towards a system of accountability with situated practices (Conrad, 2014); Latour's actor-network theory (ANT), which adds non-humans as actors (Latour, 2005; Robson and Bottausci, 2018) and creates fluid accounting objects that are translated into a system; and configuration theory (and the earlier contingency theory), with its focus on the organisational gestalt or habitus (Bourdieu and Nice, 1977) being shaped by a complex contextual interplay (Otley and Berry, 1980), may be other worthwhile perspectives to understand and explain the organisational changes that we expect to see in the coming years.
What all of these theoretical approaches have in common is that they lean towards a pragmatic worldview, which is not limited by the often artificially conjured dichotomy of a realist versus constructivist ontology in the social sciences and thus allows researchers to embrace a variety of epistemological approaches with a range of suitable research designs. This may be particularly necessary because the sheer dimensions in terms of size and speed (Crookes and Conway, 2018) and, particularly, the interconnectedness between the levels on which change is about to happen will potentially transcend the current literature on organisational change, while at the same time, we expect much of the current theory of change to remain at least partially valid in this new, rapidly changing context. Following Edmondson and McManus (2007), we believe that such an intermediate state of theory needs to be approached using mixed-methods designs, combining inductive and deductive reasoning.
From this perspective, we identified the following salient questions:
(1) What will future accounting organisations look like in terms of structure and hierarchies (Kruskopf et al., 2020)?
(2) What is the role of societal values and their transformation in a digital age (Diller et al., 2020; Troshani et al., 2019; Vial, 2019) in the changes in the "whatness" of accounting?
(3) How can further system-theoretic and cybernetic approaches help to mitigate the overpromises of AI in terms of organisational capabilities?
(4) To what extent should AI-based robots (Cooper et al., 2019; Rozario and Vasarhelyi, 2018) be seen as actors in a network and how can we find out about their agency?
(5) How will AI transform not only the practices but also the structure as a result of their enactment?
(6) What is the role of technological leadership and change management (Makrygiannakis and Jack, 2016) in this?
Research area 2: human-machine collaboration
A strong focus on the human and societal factors in the transformation towards AI-based management accounting seems timely and apt. On the one hand, it is certainly pressing from a practice point of view, as the technological advancements will inevitably have a strong impact on the existing roles, duties and the corresponding skills of workers, managers and recipients of reports in the accounting profession (Neely and Cook, 2011), as well as on stakeholders in general. On the other hand, we need to identify the ethical challenges in theory (Alles, 2020) to come up with normative agreements on what we want such a collaboration to look like. For the employees in the field, we need to understand the new job roles and matching qualifications that are necessary not only to persist in this new area but also to help deal with the aberrations that any change process will inevitably bring, with the ultimate goal of further developing the accounting profession. Questions in this area will be about career prospects and related skills and about how our education systems can deal with the demand, along with those about the necessary tools to support human cognition given a highly abstract and aggregated level of information (such as visualisations and interactions), those about the psychological factors when it comes to change management and the necessity to adapt, and finally those of power and control. In this, Foucauldian perspectives on what constitutes power from a critical discourse perspective may help to identify problematic developments and allow us to raise the right questions in society. The metatheories of capabilities or the resource-based view (RBV) (Alexy et al., 2018) may provide other suitable and less critical approaches to understand and guide the interplay between organisational leadership and the role of humans in an AI-augmented world. From a strategic management perspective, these theories may help us understand how a competitive advantage can be created and maintained given such rapid organisational transformations.
The decisive change in this collaboration for individuals is that future AI will not only provide the decision-relevant information but also propose the decision itself on the basis of this very information. Following these lines of thought, how to ensure bias-free cognition and the necessary transparency leading to this decision, as well as who should be held accountable (Munoko et al., 2020), will be amongst the most pressing issues. Thus, from the perspective of the individuals having to deal with the output and the decision-making of an AI system, several questions will arise. Such questions will not only include the role of trust in the decisions of such systems but also comprise more collective fears concerning how sustainable a functionalist, AI-based assessment without human values can be.
From this perspective, we identified the following salient questions:
(1) What will drive the dynamics in a geographically disembodied, highly distributed and heterogeneous AI-empowered accounting team of the future (Leitner-Hanetseder et al., 2021)?
(2) Can we find an optimal way in terms of efficiency, effectiveness and humanist values for a collaboration between AI and humans in different contexts and tasks?
(3) Who will be the new "powerful" actors in such a human-machine collaboration?
(4) What will be the necessary skills to cope with the rising demands in terms of a "digital fluency"?
(5) How should and could accounting education incorporate the necessary adaptations, to not only train students in the application of AI but also help them understand the larger picture and be aware of the humanist values and the ethical challenges of an AI application?
Research area 3: regulation
From the regulatory perspective, the need for transparency of the internal processes and internal decision-making criteria of the AI, to comply, for example, with the General Data Protection Regulation (GDPR) criteria, is still not sufficiently solved, and it may take a while to reach a satisfactory level. In the meantime, accounting and information systems researchers may need to look into which levels of transparency for which applications are really necessary. There will certainly be a difference between the perspectives of regulatory requirements, internal advisory systems based on AI-derived cost predictions and external compliance reports based on true big data when it comes to traceability, confirmability and, finally, transparency. To solve the problem of transparency and accountability, researchers need to first fully understand how deep learning systems simulate cognition, particularly when it comes to multifunctional networks. The learning process based on feedback loops, which leads, for example, to the known problems of overfitting and easily introduces a potential sample bias, may provide more hurdles to overcome before a truly transparent, traceable and accountable AI system is possible (Buhmann et al., 2019; Leicht-Deobald et al., 2019; Martin, 2019). Besides the necessary regulatory changes, for example, those concerning labour rights and standards, taxation and data protection, other interesting insights may include the necessity to redefine the role of auditors and authorities to ensure compliance with these changes. Other worthwhile endeavours may be to define how accounting standards need to adapt to better reflect the quality and the worth of the collected data and the derived intelligence of such intangible assets.
Finally, research needs to carefully monitor and guide regulatory communication that not only is comprehensible by humans but also can be processed by accounting systems, such as the already existing International Financial Reporting Standards (IFRS) or Financial Accounting Standards Board (FASB) codifications.
From this perspective, we identified the following salient questions:
(1) How can regulations be translated into a machine-readable format and to what extent will AI be able to interpret them teleologically?
(2) Do we need additional IFRS and US Generally Accepted Accounting Principles (US-GAAP) regulations on data as assets (Birch et al., 2020)?
(3) How can we find a balance between stifling over-regulation and the potentially negative externalities of unsupervised innovation?
(4) Who can and should be held accountable in terms of decision-making and the outcomes: AI or management?
(5) How can data rights be algorithmically defined and enforced, and how can protection and compliance with data regulations be ensured (Gruschka et al., 2018)?
(6) What will be the role of big data and public or private blockchains in the assurance of reporting (Bonyuet, 2020; Qasim and Kharbat, 2019)?

Research area 4: technological innovation and implications for accounting
Research in this area needs to look at information technology (IT) architectures and infrastructures, how these technological artefacts influence the practice and control of accounting systems, and the role of big data and algorithms as drivers (Baker and Andrew, 2019; Huttunen et al., 2019; Salijeni et al., 2018). The above-described necessity to include external data from various sources and in various formats in a vast, virtual data repository will bring forth many questions. Moreover, variable-efficient problem modelling that is informed by information-theoretical concerns of which data are needed and what may be available in abundance would catapult current solutions towards considerably higher practical usability. For this, accounting and information science scholars will need to work together with data scientists to identify both theoretical frameworks and the corresponding algorithmic solutions (Kellogg et al., 2019; Kemper and Kolkman, 2019). From this perspective, we identified the following salient questions:
(1) How should the ideal infrastructure be laid out depending on the tasks and context, including considerations on cloud versus internal storage and computing power, speed, scalability and flexibility and, most importantly, availability?
(2) How can AI base its calculations and decisions on just the relevant information and use its resources efficiently, for example, through clever feature selection and by avoiding overly complex models? In other words, how can human domain know-how and the related heuristics be translated into the inner workings of AI, and how can algorithms such as ridge or L2 regressions help to avoid overfitting and enhance external validity (Crowder, 2016)? (A minimal sketch of such an L2 regularisation follows after this list.)

(3) How can standardisation not only help but also potentially hinder (open) data exchange, depending on the various sources in various contexts?
(4) Following the previous question, how can the inner workings of a deep learning network as the basis of an AI system be made transparent and traceable (Kemper and Kolkman, 2019), and how can the system create targeted communication (including visualisation) of complex data structures on an aggregated level that still allows us to validate the outcome by interaction?
(5) Related to this, how can an isomorphic bias, based on hindsight learning from the machine-based decisions (leaving out alternatives), be avoided, and what security measures need to be in place to control these problems (Glikson and Woolley, 2020)?
(6) How to ensure the practical decision-making of AI when the existing data do not sufficiently specify the problem at hand?
(7) How will quantum computing in the future affect the Bremermann limit of information processing power?
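Following up on question (2) above, the sketch below shows, on synthetic data of our own making, how a closed-form ridge (L2) regression shrinks the spurious coefficients that an unregularised fit assigns to noise when candidate features outnumber observations; the sample sizes and penalty value are arbitrary illustrative choices.

```python
# Ridge (L2) regression curbs overfitting when many candidate features are
# available but few observations exist, a typical big-data accounting
# setting. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 100                                     # far more features than observations
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 0.5]                        # only three features truly matter
y = X @ beta + rng.normal(0, 0.5, n)

def ridge(X, y, lam):
    # Closed-form L2-regularised least squares: (X'X + lam*I)^-1 X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

b_ols = ridge(X, y, lam=1e-8)                      # ~unregularised: fits the noise
b_l2 = ridge(X, y, lam=10.0)                       # shrunk towards the sparse truth

# Mean magnitude of the coefficients that should be zero: the L2 fit is
# markedly smaller, i.e. less overfitted to noise.
print(np.abs(b_ols[3:]).mean(), np.abs(b_l2[3:]).mean())
```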
Research area 5: ethical implications
Finally, and more importantly from a normative perspective (Alzola, 2017; Stahl and Flick, 2011), research needs to bring in the different voices from society about what ethical boundaries need to be in place when it comes to the decision-making of AI-powered accounting systems (Dwivedi et al., 2019; Glikson and Woolley, 2020; Munoko et al., 2020). The role of cultural standards and, potentially, the role of the firm itself need to be revisited. We already see, for example, in entrepreneurship research with its recent discussions on hybrid business models, that environmental, social and commercial factors need to be taken into account when making strategic decisions. Such factors may be under-represented, as the more unstructured and less quantifiable non-financial information may be harder to process and considerably scarcer than the "hard" and easy-to-digest financial information. From the current streams of literature in digital accounting, it becomes clear that any ethical considerations need to be enforced by rules and regulations and can no longer be based on the personal human values of managers (Kellogg et al., 2019; Kirkpatrick, 2016; Kovacova et al., 2019; Martin, 2019). The AI answers to how a data-derived strategy shall be put into place need to be carefully monitored, and a societally accepted way of integrating the people, planet and profit thoughts into the merely functionalist approaches of non-human actors has to be found in a process that includes more than industry and policymakers. Any ethical considerations, as far as such considerations are even possible on a metalevel without a cultural context, will need to be inserted as rules, and the impact of a potential sample bias in machine learning has to be looked at from various critical angles. However, such AI data-derived decision-making may also have its merits, as nepotism and other irrational behaviour of managers will potentially be reduced. Therefore, agency theory may well interplay with philosophical and (critical) sociological approaches to build a solid foundation of what the role of ethics should be in AI-based accounting (ter Bogt and Scapens, 2019).
From this perspective, we identified the following salient questions:
(1) How can social justice perspectives guide our thinking on the implementation of AI and its impact on the workforce (Fia and Sacconi, 2018)?
(2) What is the role of "good" corporate governance (Haslam et al., 2019; Stacchezzini et al., 2020) in this and how can it be implemented?
(3) Can AI ever come to make ethical decisions given that the underlying algorithms (Kellogg et al., 2019; Lindebaum et al., 2020; Martin, 2019) might be biased and non-transparent?
(4) To what extent can we take up the existing utopian and dystopian fictional narratives, such as Asimov's three laws of robotics and machine meta-ethics (Anderson, 2007) as guidance for our quest in creating ethical regulations in robotic process automation (Gotthardt et al., 2020)?
(5) Will the completely rational thinking of AI bring forward the injustice integrated in a system that is based on short-termism and shareholder value rather than on humanist values? If so, do we need a discussion of societal values in the age of AI first?

Conclusion
This paper set out to first explore the potential limits of AI in controlling based on complexity and system-theoretical deliberations. From there, we derived a future research outlook on the possible applications and provided insights into a future complementarity of human-machine information processing. While this study was conceptual in nature, a theoretically informed, semi-systematic literature review from various disciplines provided the background of the discussion, and we directed the reader to the relevant examples of the identified perspectives.
With this, we also wanted to demonstrate how a blend of theoretical foundation, academic validation together with behavioural insights and derived policy advice can help a larger target audience in their decision-making and conduct around AI in accounting.
As elaborated in the article, AI was found to be severely limited in its application to controlling with respect to complexity science and cybernetics. A total of three such limits, the Bremermann limit, the problems with the partial detectability and controllability of complex systems and the inherent biases in the complementarity of human and machine information processing, were presented as salient and representative examples. We then went on to illustrate what a human-machine collaboration that makes specific use of AI depending on the task and the environment could look like.
Finally, on the basis of our deliberations, we established a multidisciplinary research agenda consisting of five areas: organisational transformation, human-machine collaboration, regulation, technological innovation and ethical considerations. For each of these areas, we proposed different angles that could revolutionise the application of AI in accounting leadership and provided empirically validated, corresponding research questions with potential theoretical underpinnings as well as methodological considerations to the community.
With this early research, we aim to start the discourse and invite the larger scholarly accounting community to embrace this new topic and field. From a practical side, our deliberations should also serve teaching professionals, corporate executives, public policymakers and civil servants who are confronted with questions around controlling and AI in a larger accounting context.