On humans, algorithms and data

Michela Arnaboldi (Dipartimento di Ingegneria Gestionale, Politecnico di Milano, Milan, Italy)
Hans de Bruijn (Delft University of Technology, Delft, The Netherlands)
Ileana Steccolini (University of Essex, Colchester, UK)
Haiko Van der Voort (Department of Technology, Policy and Management, TU Delft, Delft, The Netherlands)

Qualitative Research in Accounting & Management

ISSN: 1176-6093

Article publication date: 24 May 2022

Issue publication date: 6 June 2022


Abstract

Purpose

The purpose of this paper is to introduce the papers in this special issue on humans, algorithms and data. The authors first set themselves the task of identifying the main challenges arising from the adoption and use of algorithms and data analytics in management, accounting and organisations in general, many of which have been described in the literature.

Design/methodology/approach

This paper builds on previous literature and on case studies of the application of algorithmic logic, with artificial intelligence taken as an exemplar of this innovation. These insights are then triangulated with the findings of the papers included in this special issue.

Findings

Based on prior literature and the concepts set out in the papers published in this special issue, this paper proposes a conceptual framework that can be useful both for analysing and ordering the algorithm hype and for identifying future research avenues.

Originality/value

The value of this framework, and that of the papers in this special issue, lies in its ability to shed new light on the (neglected) connections and relationships between algorithmic applications, such as artificial intelligence, and the organisational, social and individual dimensions in which they are embedded. The framework developed in this piece should stimulate scholars to explore the intersections between “technical” issues and the organisational, social and individual issues that algorithms should help us tackle.


Citation

Arnaboldi, M., de Bruijn, H., Steccolini, I. and Van der Voort, H. (2022), "On humans, algorithms and data", Qualitative Research in Accounting & Management, Vol. 19 No. 3, pp. 241-254. https://doi.org/10.1108/QRAM-01-2022-0005

Publisher: Emerald Publishing Limited

Copyright © 2022, Michela Arnaboldi, Hans de Bruijn, Ileana Steccolini and Haiko Van der Voort.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Digital transformation is not new, but it has accelerated in recent years, driven by the Covid-19 pandemic and, before that, by advancements in technology (Hanelt et al., 2021; Agostino et al., 2021a, 2021b). A major element of change relates to the potential of new algorithmic applications, often referred to as artificial intelligence (AI). AI is a sometimes blurry term used to refer to a vast array of applications rooted in machine intelligence, spanning machine learning, deep learning and neural networks that attempt to reproduce the human brain’s capabilities (Lanzolla et al., 2020). At the basis of this revolution are algorithms powered, as never before, by great innovation in technologies for collecting, storing, sharing, modelling, processing and even visualising data. Technical advancement has been so rapid that organisations are often left adrift (Agostino et al., 2021a, 2021b; Arnaboldi, 2018) between blind enthusiasm and excessive scepticism (Arnaboldi et al., 2017).

While technical matters have been covered profusely in previous studies, there is resounding silence on the wider implications of AI and algorithmic digital transformation (Zuiderwijk et al., 2021) for people, organisations and management. Even in simple applications, such as AI call centres and chat platforms, the impact is wide-ranging; users interact with machines, passing on data at every interaction, which are then used to improve the application or feed a data set. The application settings are prepared by humans, bringing in not only data scientists and information technology experts but also the managers involved in the service. Privacy managers must be included, as they need to keep a careful check on the risks and regulations, which are always evolving nationally and internationally. Human resources departments are also involved, as they have to seek out people with the right set of skills or, conversely, make assessments about potential employee redundancies. Upper management and policymakers, finally, are crucial, as they are ultimately responsible for the outcome of any application and its impact on direct and indirect users, which also means entering into the realm of ethics.

Algorithms, now and increasingly in the future, will affect our lives as citizens and users, our organisational settings and choices, our behaviour and our public services, directly or indirectly. The objective of both the special issue and this editorial is to lay the first stone of studies into algorithmic digital transformation and its implications for accounting and management, taking in the bigger picture by bringing in all “technical” aspects and issues and connecting them more closely with the organisational, individual and contextual factors that shape their actual functions and uses. The aim of this editorial, more specifically, is not to summarise the papers, or rather, not merely to produce a synthesis, but to lead off with reflections centred on the all-encompassing world of AI, providing a framework for successive studies. The framework is part of the genesis of this special issue and underlines the need to apply qualitative methods in the studies presented here. AI and the algorithm age, more generally, have been studied inside out at the technical and quantitative level; the operational and organisational transformations relating to their application remain, instead, largely unexplored. These are complex topics, and the framework, together with the papers in this special issue, provides tangible evidence of the real need for a deeper level of analysis based on qualitative research.

To pursue the purpose we have set ourselves, this article is organised as follows. The next section outlines the challenges that have emerged in previous studies, with our reference framework then being introduced in Section 3. Section 4 presents the papers included in this special issue and the set of challenges and concerns faced in each case. Finally, in Section 5, we draw a series of conclusions to stimulate future research.

2. Challenges and responses in the algorithm age

As stated, algorithms require changes (to organisations, to practices and to mentalities) that go beyond calculation and that affect humans and organisations, because humans and organisations, in turn, affect algorithms. In this section, we look into the challenges of the algorithm hype. When put into actual practice, algorithms are likely to come with challenges. How we respond to these challenges may, in turn, lead to new challenges. We have centred our arguments in this section on AI for two reasons: AI is a socio-technical innovation where humans and algorithms overlap, and it creates a new, complex setting for decision-making and associated actions.

Much of the literature on AI is brimming with promises. The mainstream narrative is simple and powerful: AI means better information, and better information means better decision-making (Van der Voort et al., 2018). AI is said to have much potential for accounting and management, as it would be able to detect risks efficiently and in an “evidence-based” way, which in turn would improve the quality and timing of information available for decisions and actions (Höchtl et al., 2016; Mayer-Schönberger and Cukier, 2013). However, although the number of studies is still limited, there are also many critical voices. This situation indicates that AI comes not only with promises but also with challenges. We identify six of these challenges here.

The first challenge is about bias. Algorithms are trained with historical data, and those data sets can contain biases, simply because of the selection of data and the features being considered; the algorithm can, therefore, become an amplifier of these biases. Employment advertisements for doctors, for example, may be distributed using an algorithm that mainly directs them to male candidates, thus creating gender discrimination. The legacy reason is that, when the existing population of doctors is dominated by men, the algorithm learns that it gets more clicks or views when the advertisement is offered to men (Datta and Tschantz, 2015). The credit lending sector may be affected by similar mechanisms. An algorithm learns that giving a loan to applicants with certain characteristics is high risk and will then exclude all such applicants. Bias may lead to undesirable outcomes, and these can spark many concerns affecting society more widely. Are algorithms guilty of discrimination? How ethical is it to take decisions about a person if those decisions are based on a system that we do not completely understand? (Boyd and Crawford, 2012; Leenes, 2016). Is the data quality adequate? If not, then how “evidence-based” is decision-making? Or does it work along the lines of garbage in, garbage out? (Kitchin, 2014). Who owns the data? Who has access to the data? For what and for whom are the data used? (Uprichard and Carrigan, 2015).

The second challenge relates to value conflicts. One example is the tension between accuracy and data privacy. The use of personal data can be important for the quality of algorithmic decision-making, but the collection and use of data are subject to sometimes strict conditions. A trade-off must often be made between data privacy, on the one hand, and the goals of algorithmic decision-making, on the other. For example, if using algorithms leads to higher quality in public health, then should the trade-off between using personal data and the accuracy of results be in favour of accuracy? (De Bruijn, 2021). Other values might be part of the trade-off, for example, safety, sustainability and fairness, and it may be impossible to maximise all of them at once. This means that algorithmic decision-making will entail value conflicts, and favouring one value will come at the cost of another (De Graaf and Van der Wal, 2010; Steenhuisen and Van Eeten, 2008).

A commonly heard response to bias and value conflicts is a call for more transparency, challenge number three. If an algorithm can lead to bias, then those who use the algorithm must have a good understanding of how it works to detect these biases. The idea of “Explainable AI” (XAI) (Adadi and Berrada, 2018) is based on this concept and contends that the algorithm’s black box must be opened. The limitations are likewise pointed out in the literature; self-learning algorithms sometimes take decisions based on logics that are impossible to fathom, even by experts, and so transparency is a (near) impossibility (De Bruijn, Janssen and Warnier, forthcoming). The implication is that, for algorithms to be made or kept explainable, there may be further conflicts in values. If “explainability” means simplifying, then it will come at the cost of the effectiveness of algorithms, as “more complex models enjoy much more flexibility than their simpler counterparts, allowing for more complex functions to be approximated” (Barredo Arrieta et al., 2020). For data scientists, this means that accuracy and transparency in algorithms may be in conflict. There is a similar conflict for decision-makers. They have an interest in the effectiveness of the algorithm, as it informs the decisions they make. At the same time, as decision-makers have to account for these decisions, they have a vested interest in the algorithms being explainable (Goebel et al., 2018; Preece et al., 2018). The response to the challenge of bias and value conflicts will, itself, bring up new challenges.

The fourth challenge is the issue of control. While transparency is a response to the challenges of AI use, control is about who is in pole position in that response. Developing and using algorithms require specialised knowledge and skills. Data analysts can be seen as professionals who need discretionary freedom to exercise their profession (Adams et al., 2020; Noordegraaf, 2020). The question, then, is who controls them? In traditional, hierarchical thinking, control is in the hands of managers, but it is questionable whether this control is sufficient, as AI and algorithms are simply too complex for many managers to handle (De Bruijn, 2002; Okwir et al., 2018). Algorithms sometimes already verge on being too difficult to understand even for experts, and the managerial echelon certainly does not have the skills to act as the experts’ countervailing power.

If this analysis is correct, then it has significant implications for the design of organisations that use algorithms extensively in their decision-making. In essence, if vertical, hierarchical control does not work, then there is a clear need for more horizontal, peer-based control. This sort of oversight can be achieved by building in more checks and balances, for example, by introducing competing teams of experts in the organisation who keep each other in order. In addition, algorithms will often be used by professionals who have not developed them. The more professionals rely on algorithms developed by others, the more powerful algorithm developers and intermediaries are likely to become. This point emphasises the importance of checks and balances along the chain from algorithm development to algorithm use.

Fifth, and closely related to the previous challenge, is the dilemma of algorithmic versus professional decision-making. Data-based decision-making is often positioned against decisions made by experts, who rely heavily on their tacit knowledge and intuition. Both forms of decision-making – data-based and intuition-based – have their strengths and weaknesses. Both can come with bias, lack of transparency and value conflicts. Consequently, organisations should not rely solely on one type of decision-making. By using both types, the risks of each can be mitigated. Suppose that a decision has to be made about an accounting issue and that an algorithm and expert human intuition are both used. If both types of decision-making lead to the same outcome, then this probably means that it is the right decision.

Suppose instead that the two types of decision-making lead to different outcomes. Should this be a reason for a more in-depth analysis? It is probably useful to know why the two perspectives differ. The conclusion of such an analysis may be that only one of the two perspectives leads to the right decision, or that a mix of the two does so.

This brings us to the sixth challenge: gaining trust. The use and acceptance of algorithms depend not only on the algorithms themselves but also on the context in which they are used. The central concept here is trust. In a low-trust context, there will be, by definition, a lot of distrust in algorithmic decision-making. As a consequence, there will be more concerns about bias, lack of transparency, data harvesting and the lack of checks and balances than in a high-trust context. When evaluating algorithms, this context must, therefore, be taken into account. An organisation that uses algorithms and is confronted with a lot of distrust (e.g. among clients) will have to invest more in checks and balances and in transparency than an organisation that operates in a high-trust environment.

3. The challenges of algorithms and data analytics along two dimensions

In the previous section, we showed that AI usage may be contentious. Moreover, the common response to the challenges – transparency – may be contentious as well. Finally, the question of who should respond is open to discussion. Based on these observations, we see that complexity can be set out along two dimensions or axes.

The first axis deals with the inherent complexity of the technology and its applications and refers to the characteristics of the problem that is being dealt with by the algorithm. Depending on the specific case, there may be objections to using AI in terms of doubting its true accuracy and its effects on other values. If a problem comes with factual, relational and moral dissensus, then it is called “wicked” (Alford and Head, 2017). If the facts are clear and there is consensus about the values being affected, then there is a “structured problem”. AI-based call centres are a case in point for structured problems. Call centres are often set up to give users generic information about a service and how that service can be accessed, or on how to solve simple problems in filling out forms or carrying out various kinds of tasks. AI systems learn continuously from these calls to provide ever-more effective and useful information. When users are unsatisfied, they can usually shift to more traditional methods of interaction to find their answers.

Even AI call centre projects can move along the vertical axis and become more critical. Suppose the function of the contact centre is not just to provide information but also to serve as a contact point for collecting information from users about complex, sensitive issues, such as offering economic incentives/discounts to access medical services. In this case, alongside the information they receive, users also provide information that is then used by the provider. Moreover, the complexity of the issues in play means that there can be many possible ways to interpret the data.

Several of the issues presented in Section 2 start to emerge here. Firstly, because of the two-way flow of information, users, as both senders and receivers of information, could become suspicious of what happens to the information they send, introducing the need for the service provider to create trust and be transparent about how the information collected is used. In asking for and collecting information, if the algorithm is set up in an appropriate way, then it will improve whatever knowledge it has. However, there will always be a choice in selecting the information used to set priorities, which inevitably creates a bias. When these choices have an impact on the user (e.g. in priority setting), this can become critical and raise value conflicts. As an example of a highly wicked problem, suppose that an electricity transmission system operator has to make an investment decision about the deployment of high-voltage cables in a certain region. The complexity of the problem brings with it numerous issues. There are potentially different views on the values that are relevant. A trade-off must be made between service efficiency, the local area’s economy and ecological and health factors, and choices have to be made about the priority of each, hence influencing how the algorithm is trained. What is/are the objective function/s to be considered? Even if a multi-objective function is chosen, decisions will still need to be made about how to weight or prioritise the different outcomes. Furthermore, even geographical boundaries can be a problem when a larger set of impacts is included, such as the economy (e.g. labour and house prices) or health. All the issues listed in Section 2 become heightened, that is, all the concerns about bias, transparency, value conflicts and checks and balances (do we know whose facts and values are supported by the algorithm?).

The second axis shifts the attention from the “object” processed by the algorithm to the actors dealing with the algorithm itself and the rules they apply. In Section 2, we referred to the call for transparency in response to bias and value conflicts. We also referred to data scientists, professionals and managers as the possible initiators of these responses. Both the question of how to respond and that of who responds are institutional issues. Institutions guide the way we tackle problems, wicked or otherwise. We therefore identify the maturity of institutions as the second axis. In a highly developed institutional context, roles, tasks and organisations are clear and not subject to major change. The same holds for human–algorithm interaction. If interactions are bound to stable formal or informal rules, then at least the way we deal with wicked problems is relatively clear. In a narrow, formal sense, institutions are rules and regulations. Mature, formal institutions are hardly ever challenged or questioned. For instance, it is simply forbidden by law or formal practice to use certain algorithms for certain processes. In such a situation, it is easier to tackle the problem. A decision about the problem – including all its trade-offs – will in effect already have been taken by the lawmaker or by upper management in the organisation.

In a broader sense, institutions also include more informal, personalised rules (Thelen and Steinmo, 1992). For instance, tasks, roles and the way they are orchestrated in an organisation can be seen as institutions. In short, the very question of where humans stop and algorithms begin (a question addressed by all the papers in this special issue) indicates a weakly developed institutional context. The roles of algorithms and humans do not appear to be well defined, nor have they been the subject of much scientific research. The same holds true for the humans who work with algorithms. There is semantic ambiguity about what “data analysts” and “data scientists” in fact do. Theirs is often seen as a new profession, one that still has to be developed in the context of more traditional organisational roles (Lismont et al., 2015; Kitchin, 2021). How this evolves may depend on the organisation.

At a higher level, roles within the functional chain, from data generation to decision-making, have yet to be settled (Janssen et al., 2017). These roles may become more clear-cut if, for instance, the developers and users of algorithms are separated. They may work for different organisations, maybe even on different sides of the public–private divide. In such a situation, it is hard for users to keep tabs on the algorithms.

Here, we can also assume that the more mature these institutions are, the easier it is to tackle problems, wicked or not. If institutions are not mature, then roles within the chain, professional norms and solutions will all be subject to conflict. Value conflicts have not yet been defined, and nor have the responsibilities of those tasked with tackling them. Taking once again the example of a call centre used to access a service, the possible conflicts are marginal and linked to the (lack of) access of some categories of people to digital services, an issue that is more broadly associated with service dematerialisation and consolidated themes explored in previous studies. When it is the call centre that “decides” who is further up the line to access the service, the context may become open to divergence; for example, there could be variance in the professional views of those setting up the algorithm in terms of the variables to be included (age, financial status, etc.). When conflicts are linked to values, an AI project becomes even more conflictual and critical because of the many possible ethical positions, both political and individual.

There is also a risk from the outside during the operation of a service, caused by possible heterogeneity among users. AI and machine learning algorithms learn and are capable of processing massive amounts of information, but their learning is rooted in rules determined in “human” settings. Suppose that the outcome of a medical diagnostic test is the input to an algorithm. A first element of divergence and conflict is associated with the level of trust in the institutions in question. Some users are totally open and ready to provide information; others become suspicious, raising, in extreme cases, popular opposition. Furthermore, the ever-greater heterogeneity among users, currently found in every country, brings with it a broader portfolio of reasons why some information, requested or demanded, should not be provided. An example is the need to take a medical test that might clash with religious or cultural values. The “solution” of expanding the number of variables considered by the algorithm is only a partial answer, because it would at the very least be necessary to decide how to change the order of priorities to take in the new variables.

Figure 1 shows both axes. There is no clear-cut separation between the technological and the human sphere, with both being found on each axis.

4. Papers in this special issue

This special issue engages with some of the questions described above and offers a selection of different perspectives and methodologies.

The first three papers cover the potential, critical issues and implications of using algorithms, AI and data analytics in specific industries, examined through the fields of education, auditing and delivery services.

Soncin and Cannistrà (2021) highlight the likely benefits of improving the way data analytics are used in education. Based on the Italian education system, the authors explore several possible organisational structures for using data analytics in education and propose three approaches reflecting different combinations of, and foci on, organisational layers, roles and data management, defined as centralised, decentralised and network-based. The centralised configuration is suggested as being typical of the early stages of data analytics/AI, where the central level manages the retrieval of data and the construction of the infrastructure in a context where technical skills still have to be built. The decentralised and network-based configurations are seen as representing a more mature stage of data analytics/AI, where attention to internal organisational needs is combined with centralised control.

The authors highlight the advantages and disadvantages of the three configurations, pointing to the critical dimensions to be taken into consideration, such as the need for a critical mass of human and technological resources and the need to be close to the students and processes being observed. They, thus, conclude that a network-based model may represent a middle ground, enabling educational institutes to leverage the strength of the network and, in particular, the role of the educational data scientist in supporting the use of data. This paper, thus, illustrates the typical tensions between local and centralised systems in the collection and analysis of data and how networked solutions may make it possible to balance matters of control, transparency and trust and also address any lack of technical expertise.

Tiron-Tudor and Deliu’s (2021) paper similarly focuses on the implications of digital innovations in industry, this time looking at the auditing sector and how it is being affected by algorithms and AI. The authors set out the tensions and mutual connections between professional and algorithmic decision-making. More specifically, they conducted a thematic analysis of relevant academic literature, together with reports published by the “big four” audit firms and accounting bodies, to investigate human–algorithm interaction in the auditing process. The authors highlight the strengths and weaknesses of algorithms compared to human beings, identifying different instances of possible interaction between the two. They also conclude that auditors are likely to continue governing the processes and operations, as they know how to use the advances in technology and AI to improve auditing, and, critically, their intuition, professional reasoning and scepticism cannot be easily replaced. The real-world applications of emerging technologies can enable auditors to collect corroborating audit evidence more effectively, rapidly, reliably and comprehensively than ever before, but AI will not replace the auditor’s judgement, expertise and awareness of the sector, which remain as necessary as ever.

This research will, thus, be interesting reading for people operating in the audit industry and provide food for thought for future empirical studies into human–algorithm interaction and duality, for instance, in terms of trust, legal restrictions, ethical concerns, security and responsibility. It also suggests that there is a need to update educational curricula to place more attention on new technologies.

Al-Htaybat and von Alberti-Alhtaybat’s (2021) paper draws on actor–network theory, exploring the use of algorithms in the delivery industry through netnography, to examine whether this reflects positively on organisational practices and on the customers’ and employees’ related perceptions of organisational performance.

In the organisation under analysis (a multinational organisation in the logistics sector based in the Middle East), algorithms appear to focus only on specific dimensions of performance, namely, estimating a delivery time slot accurately, setting up and observing a preferred and precise location for delivery, and speed of delivery. This narrower focus illustrates the importance that biases can assume in an algorithm-dominated reality: under the influence of the algorithm, some objectives and activities receive more attention than others, at the risk of distracting from a wider focus on long-term values and sustainability-related objectives. Also, the excessive focus on customers appears to crowd out attention to the employees’ working conditions. The authors observe that analytics would need to incorporate greater complexity and consider further organisational objectives and dimensions, such as long-term sustainability and the employees’ perspectives. Moreover, the study sets out the risks relating to the collection and management of psychographic data for the purposes of customer profiling and micro-targeting.

The two other papers focus more specifically on platforms and their potential for not only radically innovating industries but also bringing forward new forms of control over populations.

Grassi et al.’s (2022) paper highlights the potential of blockchains to bring about a decentralised financial system, where traditional intermediation is replaced by peer-to-peer interaction. Similarly to Tiron-Tudor and Deliu (2021), the authors of this study note the importance of understanding human–algorithm interaction and the role of human decisions in a setting which may come to be dominated by algorithms. Combining the analysis of publicly available secondary data with two focus group discussions, the study concludes that decentralised finance does not eliminate financial intermediation but enables it to be performed in new ways, where decentralisation may prevent any single actor from holding too much power or a monopoly. However, the balance between humans and algorithms cannot be ensured a priori, as decentralised finance solutions can range from those that require algorithms to play a dominant role, to those that enable greater human interaction by actively involving more people.

The authors point to three management implications of decentralisation that emerged from their results, in terms of governance, record-keeping and risk. Decentralisation of governance (decision-making) entails moving away from a financial system where a single entity, or a restricted group of entities, is entitled to make decisions towards a financial system where no single entity has exclusive control over the markets, services or processes through which these services are delivered. In a decentralised financial system, the rules are embedded in software parameters that can be changed only if a sufficient majority of users so agree. The decentralisation of record-keeping, enabled by blockchains, refers to data being stored and accessed across broader numbers of users, instead of being held centrally, where there is limited or no control in the case of mistakes or cyberattacks. The decentralisation of risk-taking may carry a series of implications for the system’s stability, as risk is no longer centralised in financial intermediaries. These developments open a number of questions and challenges concerning the suitability of the supervisory financial system, its stability and exposure to risk, current legal mechanisms and enforcement, as well as interoperability considerations. This paper, thus, provides further thought-provoking considerations on the importance of taking trust, control and transparency issues into account within the development and management of platforms.

Finally, Xiang’s (2022) paper takes a netnographic approach to explore how YouTube enables biopolitical control. The study shows that the video-sharing platform facilitates interaction between users but does so selectively, thus instituting a form of control over them. By creating the illusion of a marketplace, where success can be attained, it encourages the creation of user-generated content to maximise engagement. According to the author, platforms portray themselves as spaces where relationships can develop freely, but these relationships are in fact shaped by the platform’s own agenda, insofar as taking part in the platform’s economy is, in principle, open to all, but equal visibility is not necessarily ensured. This paper, thus, highlights the inherent contrast between the declared “freedom” and “openness” of platforms, and the underlying rigidity of their centrally managed protocols and the tension between decentralised agency and centralised controls in the digital world. This paper can, thus, provide a vivid picture of how biases may remain hidden, whilst creating an illusory high-trust environment and giving strong centralised control to providers, yet leaving very limited power to users.

The findings of these papers show that the strengths and risks of deploying algorithms are closely tied to the humans developing and using them and to the way these humans organise themselves. The papers in this special issue furthermore highlight the pervasiveness of processes of datafication, the use of algorithms and the increasing relevance of data analytics in a plurality of industries, organisations and contexts. They additionally point to the many forms these processes take and the uses they have, which are translated into a multiplicity of tools and media, changing the ways in which people interact, decide, are controlled and work and how performance is assessed and determined.

The contributions also provide a rich and nuanced account of both the strengths and potential and the risks and drawbacks of big data and analytics, algorithms and platforms.

At the core of their analyses lies a view that any study of algorithms, data analytics and platforms has at its centre a focus on humanity, as it is humans who define the features of the digital “tools” and media, humans who interact with these tools, who are supported by them in their daily activity and who benefit, or suffer, from their consequences. All the studies show clearly how data and digitalisation are a reflection of human will, agency, intentions and interaction. Overall, they convey the centrality of “humanity” in the current wave of digitalisation, as they signal that choices of design in analytics are driven by human capacity and human needs and, crucially, that AI will only be able to support human judgement, not replace it.

However, these papers also point out that digital data and platforms intervene in human lives, organisational processes and operations, shaping what is considered relevant in decisions, what counts as performance, what is visible or invisible and the distribution of power among actors. They suggest ways in which algorithms, AI, digital platforms and data analytics can be better managed and governed, but they also highlight a number of open, critical issues, which will need to be explored further and addressed in the future, both in practice and in research. In particular, it is clear that, at times, these technologies create an illusion of “technical” neutrality and objectivity, openness, user-centredness and, thus, implicit trustworthiness, which is in stark contrast to their biases and to the newly re-centralised forms of power connected to the ownership of data and platforms, the capacity to analyse these data and the determination of the “position” held by actors in their relevant field. These features point to the centrality of building expertise among the users of data and of strengthening their awareness of the inherent biases of algorithms, AI and digital platforms. This situation is becoming ever more critical as, alongside “simple” questions, the new technologies are increasingly being used to manage “wicked” problems, thus widening the divergence in values between the actors in play (and the power conferred upon them by their control over media, platforms, data and their capacity to analyse such data).

Finally, it is worth noting that, in exploring a new field, the authors of the papers in this special issue turned to newer, less traditional methodological approaches, bringing in netnography and cyber-ethnographies, focus groups and participatory observations, alongside more traditional thematic analyses. Their different approaches reflect the need to explore a lesser-known field and, in some cases, to capture the initial or expected consequences of new ways of looking at and using data and digital platforms.

5. Concluding remarks

In Autumn 2018, we, the authors, started discussing the possibility of this special issue. One of us set the ball rolling, but it was easy to agree that there was a need for it, as the attention being paid by practitioners and academics to the potential of algorithms was soaring. However, we were also observing two trends that we felt were dangerous to knowledge across the board: scholars were concentrating excessively on technical aspects, often publishing theoretical or quantitative studies, and among practitioners there was excessive optimism, with many being all too ready to believe that algorithms and AI can provide “solutions” to unsolved and complex problems. This special issue originated from the need to stimulate a debate on the link between machines and humans and also to provoke considerations on the value of qualitative research into this subject matter.

What has changed? We have gone through a pandemic, which did not throw our original objective off course but made it even more pertinent in the face of increased attention to the digital world and its potential. The shock created by the pandemic highlighted the variation in how well (or poorly) individuals and organisations are prepared for the digital age, and even more so for AI, not only technically but also organisationally and from the standpoint of managing the problems of bias, control, equity and management that we highlighted in Section 2. The articles in this special issue give us the material to enter into the deeper sphere of algorithmic applications, in all their diversity and variety. No paper waves a magic wand or provides a “solution”; however, all papers set out conclusions with a rich body of contextualisation that opens the debate up to further study. These ensuing studies will, in turn, play a crucial part in the expected growth of applications, as governments handling the fallout of the pandemic have been, and still are, allocating previously unimaginable amounts of resources to economic recovery and digital innovation.

We conclude this editorial by using the conceptual scheme set out in Figure 1 to frame the research challenges for qualitative research, considering the multi-faceted complexity linked to the challenges highlighted in Section 2 and the results set out in the studies included in this special issue.

Firstly, we place what may sound obvious at the heart of our concluding remarks: how we define the research problem we wish to address. The objective must be sufficiently broad to influence the advancement of knowledge; conversely, we must avoid the temptation to include all possible factors. We could say that at the core of all methodological challenges lies a research question of reflexive breadth, to echo Adler’s (2001) use of “reflexive”. Reflexive breadth pushes for a bolder goal, but there must also be rationality in assessing how far such breadth goes, to prevent it from becoming a liability, that is, the risk of offering conclusions that are too general and thus obvious. How should this be done? No solution or machine is able (so far) to replace the human mind or the researcher’s experience, but the framework proposed here can help future researchers investigate the issues at stake set out in this special issue. Without being over-logical, in our view, the variable that should be addressed is the common element in the variables of the two axes (characteristics of the problem and maturity of institutions).

Secondly, and connected to the research question and unit of analysis, is the method. The pandemic has opened up new horizons where we were previously too cautious to venture, such as online interviews. Qualitative research has always been based on the triangulation of sources, with interviews in the field (that is, face-to-face) as a key element. The papers in this issue have drawn attention to how placing “digital” at the core of a topic can give rise to a greater wealth of information, not only through online interviews or meetings (disdained before, welcomed now) but also through the digital traces of end users, who use and/or benefit from digital applications and AI. The temptation to include new sources, sometimes led by the researcher’s curiosity to explore new ground, should be carefully weighed against whether they fit with the research question and the position of the research question itself in the framework. Wicked and divergent contexts (at the top right of the framework) need a larger set of sources to cover, for example, the actors’ many perspectives. Simpler contexts (at the bottom left of the framework) reduce this need; hence, they are the ideal ground, for example, for longitudinal studies.

Thirdly, practices for developing and using algorithms are expanding rapidly. There seems, instead, to be a delay in the development of an institutional context in which to use the algorithms – we are not yet in a phase of maturity. It is almost inevitable that institutions will lag behind algorithmic decision-making. The future scope and impact are surrounded by uncertainties, meaning that it is often unclear what new institutions (rules/regulations/practices) will be necessary, within private organisations and in government.

Fourthly, some of the articles in this special issue include an appeal for human–algorithm interaction. Professional intuition and the experts’ tacit knowledge are difficult to express formally within an algorithm and remain of great significance, given that they are the foundations of the interaction between algorithms and professional intuition (Tiron-Tudor and Deliu, 2021). Al-Htaybat and von Alberti-Alhtaybat (2021) observe that algorithms sometimes only relate to one certain and limited aspect of decision-making and underline the importance of human judgement. Grassi et al. (2022) show that how human knowledge and investigation are combined with algorithmic decision-making cannot always be determined ex ante, early in the decision-making process.

Starting from this basis, the boundaries of the relationship between human and algorithmic decision-making can be outlined along the following scale:

  • Replacement – algorithmic decision-making can take over human decision-making completely.

  • Partial substitution – algorithmic decision-making can be used in part of the decision-making process, while the other decisions are made by humans.

  • Coexistence – decisions can be made jointly by algorithms and humans, using their professional intuition. It could be possible to tap into the potential tension between algorithmic and human decision-making, where the algorithms’ decisions are checked by humans and vice versa. This could engender further learning processes, with both algorithmic and human decision-making improving as a result of this tension.

  • Human decision-making dominates – in certain decisions, it cannot be replaced by algorithms.

Figures

Figure 1. Framing challenges

References

Adadi, A. and Berrada, M. (2018), “Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)”, IEEE Access, Vol. 6, pp. 52138-52160, doi: 10.1109/ACCESS.2018.2870052.

Adams, T.L., Kirkpatrick, I., Tolbert, P.S. and Waring, J. (2020), “From protective to connective professionalism: Quo Vadis professional exclusivity?”, Journal of Professions and Organization, Vol. 7 No. 2, pp. 234-245.

Agostino, D., Arnaboldi, M. and Lema, M.D. (2021a), “New development: COVID-19 as an accelerator of digital transformation in public service delivery”, Public Money and Management, Vol. 41 No. 1, pp. 69-72.

Agostino, D., Saliterer, I. and Steccolini, I. (2021b), “Digitalization, accounting and accountability: a literature review and reflections on future research in public services”, Financial Accountability and Management, Vol. 38 No. 2.

Alford, J. and Head, B.W. (2017), “Wicked and less wicked problems: a typology and a contingency framework”, Policy and Society, Vol. 36 No. 3, pp. 397-413, doi: 10.1080/14494035.2017.1361634.

Al-Htaybat, K. and von Alberti-Alhtaybat, L. (2021), “Enhancing delivery: algorithms supporting performance management in the logistics sector”, Qualitative Research in Accounting and Management, doi: 10.1108/QRAM-04-2021-0063.

Arnaboldi, M. (2018), “The missing variable in big data for social sciences: the decision-maker”, Sustainability, Vol. 10 No. 10, p. 3415.

Arnaboldi, M., Busco, C. and Cuganesan, S. (2017), “Accounting, accountability, social media and big data: revolution or hype?”, Accounting, Auditing and Accountability Journal, Vol. 30 No. 4, pp. 762-776.

Barredo Arrieta, A., Díaz-Rodríguez, N., Ser, J.D., Bennetot, A., Tabik, S., Barbado, A., … Herrera, F. (2020), “Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI”, Information Fusion, Vol. 58, pp. 82-115, doi: 10.1016/j.inffus.2019.12.012.

Boyd, D. and Crawford, K. (2012), “Critical questions for big data: provocations for a cultural, technological, and scholarly phenomenon”, Information, Communication and Society, Vol. 15 No. 5, pp. 662-679.

Datta, A. and Tschantz, M.C. (2015), “Automated experiments on ad privacy settings: a tale of opacity, choice, and discrimination”, arXiv preprint arXiv:1408.6491, available at: https://arxiv.org/abs/1408.6491

De Bruijn, H. (2002), “Performance measurement in the public sector: strategies to cope with the risks of performance measurement”, International Journal of Public Sector Management, Vol. 15 No. 7, pp. 578-594.

De Bruijn, H. (2021), The Governance of Privacy, Amsterdam University Press, Amsterdam.

De Graaf, G. and Van der Wal, Z. (2010), “Managing conflicting values in public policy”, The American Review of Public Administration, Vol. 40 No. 6, pp. 623-630.

Goebel, R., Chander, A., Holzinger, K., Lecue, F., Akata, Z., Stumpf, S., … Holzinger, A. (2018), “Explainable AI: the new 42?”, International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Springer, Cham, pp. 295-303.

Grassi, L., Lanfranchi, D., Faes, A. and Renga, F.M. (2022), “Do we still need financial intermediation? The case of decentralized finance – DeFi”, Qualitative Research in Accounting and Management, doi: 10.1108/QRAM-03-2021-0051.

Hanelt, A., Bohnsack, R., Marz, D. and Antunes Marante, C. (2021), “A systematic review of the literature on digital transformation: insights and implications for strategy and organizational change”, Journal of Management Studies, Vol. 58 No. 5, pp. 1159-1197.

Höchtl, J., Parycek, P. and Schollhammer, R. (2016), “Big data in the policy cycle: policy decision making in the digital era”, Journal of Organizational Computing and Electronic Commerce, Vol. 26 Nos 1/2, p. 147.

Janssen, M., van der Voort, H. and Wahyudi, A. (2017), “Factors influencing big data decision-making quality”, Journal of Business Research, Vol. 70, pp. 338-345.

Kitchin, R. (2014), The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences, SAGE Publications Ltd., London.

Kitchin, R. (2021), Data Lives: How Data Are Made and Shape Our World, Policy Press.

Lanzolla, G., Lorenz, A., Miron-Spektor, E., Schilling, M., Solinas, G. and Tucci, C.L. (2020), “Digital transformation: what is new if anything? Emerging patterns and management research”, Academy of Management Discoveries, Vol. 6 No. 3, pp. 341-350.

Leenes, R. (2016), “De voorspellende overheid”, Bestuurskunde, Vol. 25 No. 1, pp. 38-43.

Lismont, J., Vanthienen, J., Baesens, B. and Lemahieu, W. (2015), “The role of the data scientist in the modern organization”, In European Conference on Operational Research, 12-15 July.

Mayer-Schönberger, V. and Cukier, K. (2013), Big Data: A Revolution That Will Transform How We Live, Work and Think, John Murray Publishers, London.

Noordegraaf, M. (2020), “Protective or connective professionalism? How connected professionals can (still) act as autonomous and authoritative experts”, Journal of Professions and Organization, Vol. 7 No. 2, pp. 205-223.

Okwir, S., Nudurupati, S., Ginieis, M. and Angelis, J. (2018), “Performance measurement and management systems: a perspective from complexity theory”, International Journal of Management Reviews, Vol. 20 No. 3, pp. 731-754.

Preece, A., Harborne, D., Braines, D., Tomsett, R. and Chakraborty, S. (2018), “Stakeholders in explainable AI”, AAAI FSS-18: Artificial Intelligence in Government and Public Sector.

Soncin, M. and Cannistrà, M. (2021), “Data analytics in education: are schools on the long and winding road?”, Qualitative Research in Accounting and Management, doi: 10.1108/QRAM-04-2021-0058.

Steenhuisen, B. and van Eeten, M. (2008), “Invisible trade-offs of public values: inside Dutch railways”, Public Money and Management, Vol. 28 No. 3, pp. 147-152, doi: 10.1111/j.1467-9302.2008.00636.x.

Thelen, K. and Steinmo, S. (1992), “Historical institutionalism in comparative politics”, in Steinmo, S., Thelen, K., Longstreth, F. (Eds), Structuring Politics: Historical Institutionalism in Comparative Analysis, Cambridge University Press.

Tiron-Tudor, A. and Deliu, D. (2021), “Reflections on the human-algorithm complex duality perspectives in the auditing process”, Qualitative Research in Accounting and Management, doi: 10.1108/QRAM-04-2021-0059.

Uprichard, E. and Carrigan, M. (2015), “Most big data is social data – the analytics need serious interrogation”, Impact of Social Sciences Blog, available at: http://blogs.lse.ac.uk/impactofsocialsciences/2015/02/12/philosophy-of-data-science-emma-uprichard (accessed 28 December 2017).

Van der Voort, H., Klievink, B., Arnaboldi, M. and Meijer, A. (2018), “Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision making?”, Government Information Quarterly, Vol. 36 No. 1, doi: 10.1016/j.giq.2018.10.011.

Xiang, Y. (2022), “YouTube and the protocological control of platform organisations”, Qualitative Research in Accounting and Management, doi: 10.1108/QRAM-04-2021-0060.

Zuiderwijk, A., Chen, Y. and Salem, F. (2021), “Implications of the use of artificial intelligence in public governance: a systematic literature review and a research agenda”, Government Information Quarterly, Vol. 38 No. 3, p. 101577, doi: 10.1016/j.giq.2021.101577.

Corresponding author

Michela Arnaboldi can be contacted at: michela.arnaboldi@polimi.it
