Search results

1 – 10 of over 2000
Article
Publication date: 19 May 2023

Weimo Li, Yaobin Lu, Peng Hu and Sumeet Gupta

Abstract

Purpose

Algorithms are widely used to manage various activities in the gig economy. Online car-hailing platforms, such as Uber and Lyft, are exemplary embodiments of such algorithmic management, where drivers are managed by algorithms for task allocation, work monitoring and performance evaluation. Despite employing drivers at substantial scale, the platforms face the challenge of maintaining and fostering drivers' work engagement. Thus, this study aims to examine how the algorithmic management of online car-hailing platforms affects drivers' work engagement.

Design/methodology/approach

Drawing on the transactional theory of stress, the authors examined the effects of algorithmic monitoring and fairness on online car-hailing drivers' work engagement and revealed the mediating effects of challenge-hindrance appraisals. The hypotheses were tested using partial least squares structural equation modeling (PLS-SEM) on survey data collected from 364 drivers. The authors also applied path comparison analyses to further compare the effects of algorithmic monitoring and fairness on the two types of appraisals.

Findings

This study finds that online car-hailing drivers' challenge-hindrance appraisals mediate the relationship between algorithmic management characteristics and work engagement. Algorithmic monitoring positively affects both challenge and hindrance appraisals in online car-hailing drivers. However, algorithmic fairness promotes challenge appraisal and reduces hindrance appraisal. Consequently, challenge and hindrance appraisals lead to higher and lower work engagement, respectively. Further, the additional path comparison analysis showed that the hindering effect of algorithmic monitoring exceeds its challenging effect, and the challenge-promoting effect of algorithmic fairness is greater than its hindrance-reducing effect.

Originality/value

This paper reveals the underlying mechanisms concerning how algorithmic monitoring and fairness affect online car-hailing drivers' work engagement and fills the gap in the research on algorithmic management in the context of online car-hailing platforms. The authors' findings also provide practical guidance for online car-hailing platforms on how to improve the platforms' algorithmic management systems.

Article
Publication date: 17 October 2023

Helmi Issa, Rachid Jabbouri and Rock-Antoine Mehanna

Abstract

Purpose

The exponential growth of artificial intelligence (AI) technologies, coupled with advanced algorithms and increased computational capacity, has facilitated their widespread adoption in various industries. Among these, the financial technology (FinTech) sector has been significantly impacted by AI-based decision-making systems. Nevertheless, a knowledge gap remains regarding the intricate mechanisms behind the micro-decision-making process employed by AI algorithms. This paper aims to discuss the aforementioned issue.

Design/methodology/approach

This research utilized a sequential mixed-methods approach, obtaining data through 18 interviews conducted with a single FinTech firm in France, as well as 148 e-surveys administered to participants employed at different FinTechs located throughout Europe.

Findings

Three main themes (ambidexterity, data sovereignty and model explainability) emerge as underpinnings for effective AI micro decision-making in FinTechs.

Practical implications

This research aims to minimize ambiguity by proposing a model that functions as an “infrastructural” layer, providing a more comprehensive illumination of the micro-decisions made by AI.

Originality/value

This research is the first empirical exploration of the essential factors that underpin effective AI micro-decisions in FinTechs.

Details

Management Decision, vol. 61 no. 11
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 18 August 2023

Anniek Brink, Louis-David Benyayer and Martin Kupp

Abstract

Purpose

Prior research has revealed that a large share of managers are reluctant to use artificial intelligence (AI) in decision-making. This aversion can be caused by several factors, including individual drivers. The purpose of this paper is to better understand the extent to which individual factors influence managers’ attitudes towards the use of AI and, based on these findings, to propose solutions for increasing AI adoption.

Design/methodology/approach

The paper builds on prior research, especially on the factors driving the adoption of AI in companies. In addition, data was collected by means of 16 expert interviews using a semi-structured interview guideline.

Findings

The study concludes on four groups of individual factors ranked according to their importance: demographics, familiarity, psychology and personality. Moreover, the findings emphasized the importance of communication and training, explainability and transparency and participation in the process to foster the adoption of AI in decision-making.

Research limitations/implications

The paper identifies four ways to foster AI integration for organizational decision-making as areas for further empirical analysis by business researchers.

Practical implications

This paper offers four ways to foster AI adoption for organizational decision-making: explaining the benefits and training the more adverse categories, explaining how the algorithms work and being transparent about the shortcomings, striking a good balance between automated and human-made decisions, and involving users in the design process.

Originality/value

This study is one of the few to conduct qualitative research into the individual factors driving usage intention among managers, hence providing more in-depth insights into managers’ attitudes towards algorithmic decision-making. This research could serve as guidance for developers building algorithms and for managers implementing and using algorithms in organizational decision-making.

Details

Journal of Business Strategy, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0275-6668

Book part
Publication date: 25 March 2021

Tayfun Kasapoglu and Anu Masso

Abstract

Purpose: This study explores the perspectives of data experts (DXs) and refugees on the algorithms used by law enforcement officers and focuses on emerging insecurities. The authors take police risk-scoring algorithms (PRSA) as a proxy to examine perceptions on algorithms that make/assist sensitive decisions affecting people’s lives.

Methodology/approach: In-depth interviews were conducted with DXs (24) in Estonia and refugees (19) in Estonia and Turkey. Using projective techniques, the interviewees were provided with a simple definition of PRSA and a photo to encourage them to share their perspectives. The authors applied thematic analysis to the data, combining manual and computer-aided techniques using the MAXQDA software.
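The computer-aided step in this kind of thematic analysis amounts to tagging transcript segments with theme codes and aggregating their frequencies. The authors used MAXQDA; the sketch below is hypothetical, with invented interviewee IDs and theme codes, purely to illustrate the counting step.

```python
# Hypothetical sketch of the frequency step in thematic coding.
# Interviewee IDs and theme codes are invented for illustration only.
from collections import Counter

coded_segments = [  # (interviewee, theme code) pairs produced by manual coding
    ("DX-01", "fairness"), ("DX-01", "social_borders"),
    ("REF-03", "authority_intent"), ("REF-07", "fairness"),
    ("DX-12", "social_borders"), ("REF-03", "fairness"),
]
theme_counts = Counter(code for _, code in coded_segments)
print(theme_counts.most_common(1))  # → [('fairness', 3)]
```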

Findings: The study revealed that the perspectives on PRSA may change depending on the individual’s position relative to the double security paradox surrounding refugees. The use of algorithms for a sensitive matter such as security raises concerns about potential social outcomes, intentions of authorities and fairness of the algorithms. The algorithms are perceived to construct further social borders in society and justify extant ideas about marginalized groups.

Research limitations: The study used a small sample and aimed to explore the perspectives of refugees and DXs, taking PRSA as the case, without targeting representativeness.

Originality/value: The study is based on a double security paradox where refugees who escape their homelands due to security concerns are also considered to be national security threats. DXs, on the other hand, represent a group that takes an active role in decisions about who is at risk and who is risky. The study provides insights on two groups of people who are engaged with algorithms in different ways.

Details

Theorizing Criminality and Policing in the Digital Media Age
Type: Book
ISBN: 978-1-83909-112-4

Open Access
Article
Publication date: 27 September 2022

Hanna Kinowska and Łukasz Jakub Sienkiewicz

Abstract

Purpose

Existing literature on algorithmic management practices – defined as autonomous data-driven decision-making in people management by adoption of self-learning algorithms and artificial intelligence – suggests complex relationships with employees' well-being in the workplace. While the use of algorithms can have positive impacts on people-related decisions, they may also adversely influence job autonomy, perceived justice and – as a result – workplace well-being. A literature review revealed a significant gap in empirical research on the nature and direction of these relationships. Therefore, the purpose of this paper is to analyse how algorithmic management practices directly influence workplace well-being, as well as to investigate their relationships with job autonomy and total rewards practices.

Design/methodology/approach

A conceptual model of the relationships between algorithmic management practices, job autonomy, total rewards and workplace well-being was formulated on the basis of a literature review. The proposed model was empirically verified through confirmatory analysis by means of structural equation modelling (SEM CFA) on a sample of 21,869 European organisations, using data collected by Eurofound and Cedefop in 2019, with a focus on investigating the direct and indirect influence of algorithmic management practices on workplace well-being.
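The direct-plus-indirect structure such a model tests can be illustrated with the product-of-paths rule used in SEM. The coefficients below are invented placeholders (not Eurofound estimates), chosen only to mirror the signs reported in the findings: a moderate positive direct path from algorithmic management (AM) to well-being (WB), and negative indirect paths via job autonomy (JA) and total rewards (TR).

```python
# Illustrative product-of-paths effect decomposition with assumed coefficients.
direct = 0.15                # AM -> WB, direct path (assumed)
via_autonomy = -0.30 * 0.40  # (AM -> JA) * (JA -> WB), indirect path (assumed)
via_rewards = -0.20 * 0.35   # (AM -> TR) * (TR -> WB), indirect path (assumed)

total = direct + via_autonomy + via_rewards
print(round(total, 2))  # → -0.04: the negative indirect paths outweigh the direct gain
```

With these placeholder values the total effect turns negative, which is the pattern the abstract describes: a positive direct effect can be undone by the indirect harm to autonomy and rewards.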

Findings

This research confirmed a moderate, direct impact of the application of algorithmic management practices on workplace well-being. More importantly, the authors found that this approach has an indirect influence, through a negative impact on job autonomy and total rewards practices. The authors observed significant variation in the level of influence depending on the size of the organisation, with decreasing impacts of algorithmic management on well-being and job autonomy for larger entities.

Originality/value

While the influence of algorithmic management on various workplace practices and effects is now widely discussed, the empirical evidence – especially for traditional work contexts, not only the gig economy – is highly limited. The study fills this gap and suggests that algorithmic management – understood as an automated decision-making vehicle – might not always lead to better, well-being-focused people management in organisations. Academic studies and practical applications need to account for the possible negative consequences of algorithmic management for workplace well-being, by better reflecting the complex nature of the relationships between these variables.

Details

Information Technology & People, vol. 36 no. 8
Type: Research Article
ISSN: 0959-3845

Content available
Book part
Publication date: 11 December 2023

Mihalis Kritikos

Abstract

Details

Ethical AI Surveillance in the Workplace
Type: Book
ISBN: 978-1-83753-772-3

Article
Publication date: 23 September 2021

Donghee Shin, Azmat Rasul and Anestis Fotiadis

Abstract

Purpose

As algorithms permeate nearly every aspect of digital life, artificial intelligence (AI) systems exert a growing influence on human behavior in the digital milieu. Despite their popularity, little is known about the roles and effects of algorithmic literacy (AL) on user acceptance. The purpose of this study is to contextualize AL in the AI environment by empirically examining the role of AL in developing users' information processing in algorithms. The authors analyze how users engage with over-the-top (OTT) platforms, what awareness the user has of the algorithmic platform and how awareness of AL may impact their interaction with these systems.

Design/methodology/approach

This study employed multiple-group equivalence methods to test invariance across two groups and to examine hypotheses concerning differences in the effects of AL. The method examined how AL helps users to envisage, understand and work with algorithms, depending on their understanding of the control of the information flow embedded within them.

Findings

Our findings clarify what functions AL plays in the adoption of OTT platforms and how users experience algorithms, particularly in contexts where AI is used in OTT algorithms to provide personalized recommendations. The results point to the heuristic functions of AL in connection with its ties to trust and the ensuing attitudes and behaviors. Heuristic processes using AL strongly affect the credibility of recommendations and the way users understand the accuracy and personalization of results. The authors argue that critical assessment of AL must be understood not just in terms of how it is used to evaluate trust in a service, but also in terms of how it relates performatively to the modeling of algorithmic personalization.

Research limitations/implications

The relation between AL and trust in an algorithm lends strategic direction for developing user-centered algorithms in OTT contexts. As the AI industry has faced decreasing credibility, the role of user trust will provide insights into credibility and trust in algorithms. To better understand how to cultivate a sense of literacy regarding algorithm consumption, the AI industry could provide examples of what positive engagement with algorithm platforms looks like.

Originality/value

User cognitive processes of AL provide conceptual frameworks for algorithm services and a practical guideline for the design of OTT services. Framing the cognitive process of AL in reference to trust has made relevant contributions to the ongoing debate surrounding algorithms and literacy. While the topic of AL is widely recognized, empirical evidence on the effects of AL is relatively rare, particularly from the user's behavioral perspective. No formal theoretical model of algorithmic decision-making based on the dual processing model has been researched.

Article
Publication date: 9 September 2022

Enrico Bracci

Abstract

Purpose

Governments are increasingly turning to artificial intelligence (AI) algorithmic systems to increase the efficiency and effectiveness of public service delivery. While the diffusion of AI offers several desirable benefits, caution and attention should be paid to the accountability of AI algorithmic decision-making systems in the public sector. The purpose of this paper is to establish the main challenges that an AI algorithm might bring about for public service accountability. In doing so, the paper also delineates future avenues of investigation for scholars.

Design/methodology/approach

This paper builds on previous literature and anecdotal cases of AI applications in public services, drawing on streams of literature from accounting, public administration and information technology ethics.

Findings

Based on previous literature, the paper highlights the accountability gaps that AI can bring about and the possible countermeasures. The introduction of AI algorithms in public services modifies the chain of responsibility. This distributed responsibility requires accountability governance, together with technical solutions, to meet multiple accountabilities and close the accountability gaps. The paper also delineates a research agenda for accounting scholars to make accountability more “intelligent”.

Originality/value

The findings of the paper shed new light on how public service accountability in AI should be considered and addressed. The results developed in this paper will stimulate scholars to explore, also from an interdisciplinary perspective, the issues public service organizations are facing to make AI algorithms accountable.

Details

Accounting, Auditing & Accountability Journal, vol. 36 no. 2
Type: Research Article
ISSN: 0951-3574

Open Access
Article
Publication date: 21 June 2022

Othmar Manfred Lehner, Kim Ittonen, Hanna Silvola, Eva Ström and Alena Wührleitner

Abstract

Purpose

This paper aims to identify ethical challenges of using artificial intelligence (AI)-based accounting systems for decision-making and discusses its findings based on Rest's four-component model of antecedents for ethical decision-making. This study derives implications for accounting and auditing scholars and practitioners.

Design/methodology/approach

This research is rooted in the hermeneutics tradition of interpretative accounting research, in which the reader and the texts engage in a form of dialogue. To substantiate this dialogue, the authors conduct a theoretically informed, narrative (semi-systematic) literature review spanning the years 2015–2020. The review's narrative is driven by the depicted contexts, and the accounting/auditing practices found in the selected articles are used as the sample, rather than the research methods themselves.

Findings

In the thematic coding of the selected papers, the authors identify five major ethical challenges of AI-based decision-making in accounting: objectivity, privacy, transparency, accountability and trustworthiness. Using Rest's component model of antecedents for ethical decision-making as a stable framework for the structure, the authors critically discuss the challenges and their relevance for a future human–machine collaboration with varying agency between humans and AI.

Originality/value

This paper contributes to the literature on accounting as a subjectivising as well as mediating practice in a socio-material context. It does so by providing a solid base of arguments that AI alone, despite its enabling and mediating role in accounting, cannot make ethical accounting decisions because it lacks the necessary preconditions in terms of Rest's model of antecedents. What is more, as AI is bound to pre-set goals and subjected to human-made conditions despite its autonomous learning and adaptive practices, it lacks true agency. As a consequence, accountability needs to be shared between humans and AI. The authors suggest that related governance as well as internal and external auditing processes need to be adapted in terms of skills and awareness to ensure ethical AI-based decision-making.

Details

Accounting, Auditing & Accountability Journal, vol. 35 no. 9
Type: Research Article
ISSN: 0951-3574

Article
Publication date: 21 December 2021

Gianclaudio Malgieri

Abstract

Purpose

This study aims to discover the legal borderline between licit online marketing and illicit privacy-intrusive and manipulative marketing, considering in particular consumers’ expectations of privacy.

Design/methodology/approach

A doctrinal legal research methodology is applied throughout, with reference to the relevant legislative frameworks. In particular, this study analyzes the European Union (EU) data protection law framework [the General Data Protection Regulation (GDPR)], as it is one of the most advanced privacy laws in the world, with strong extra-territorial impact in other countries and consequent risks of high fines, compares it with privacy scholarship in the field and extracts a compliance framework for marketers.

Findings

The GDPR provides a solid compliance framework that can help to distinguish licit marketing from illicit marketing. It brings clarity through four legal tests: the fairness test, the lawfulness test, the significant-effect test and the high-risk test. Performing these tests can benefit consumers and marketers, in particular because meeting consumers’ expectations of privacy can enhance their trust. A twofold solution allows marketers to respect and leverage consumers’ privacy expectations: enhancing critical transparency and avoiding the exploitation of individual vulnerabilities.
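The four tests can be read as a simple pass/fail checklist. The sketch below is hypothetical: the test names come from the abstract, but the boolean questions and field names are invented paraphrases, not the paper's exact operationalisation and not legal advice.

```python
# Hypothetical checklist sketch of the four GDPR-derived legal tests.
# Field names and questions are illustrative assumptions only.
def gdpr_marketing_tests(practice: dict) -> dict:
    """Return a pass/fail map for the four tests applied to a marketing practice."""
    return {
        "fairness": not practice["exploits_vulnerabilities"],
        "lawfulness": practice["has_valid_legal_basis"],
        "significant_effect": not practice["materially_distorts_decisions"],
        "high_risk": not practice["high_risk_without_safeguards"],
    }

practice = {
    "exploits_vulnerabilities": False,       # no targeting of individual vulnerabilities
    "has_valid_legal_basis": True,           # e.g. consent or legitimate interest
    "materially_distorts_decisions": False,  # no manipulative steering of choices
    "high_risk_without_safeguards": False,   # any high-risk processing is mitigated
}
results = gdpr_marketing_tests(practice)
print(all(results.values()))  # → True: the practice passes all four tests
```

A practice failing any single test would flag the marketing as potentially illicit under this reading, which mirrors the sequential use of the tests described above.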

Research limitations/implications

This study is limited to the European legal framework and to theoretical analysis. Further research is necessary to investigate other legal frameworks and to test this model in practice, measuring not only consumers’ expectations of privacy in different contexts but also the practical managerial implications of the four GDPR tests for marketers.

Originality/value

This study originally contextualizes the most recent privacy scholarship on online manipulation within the EU legal framework, proposing an easy and accessible four-step test and a twofold solution for marketers. Such a test might be beneficial both to marketers and to consumers.

Details

Journal of Consumer Marketing, vol. 40 no. 2
Type: Research Article
ISSN: 0736-3761
