Search results

1 – 10 of over 7000
Article
Publication date: 8 March 2024

Sarah Jerasa and Sarah K. Burriss

Abstract

Purpose

Artificial intelligence (AI) has become increasingly important and influential in reading and writing. The rise of social media spaces like TikTok has also shifted the ways multimodal composition takes place alongside AI. This study argues that within spaces like TikTok, human composers must attend to the ways they write for, with and against the AI-powered algorithm.

Design/methodology/approach

Data collection was drawn from a larger study on #BookTok (the TikTok subcommunity for readers) that included semi-structured interviews in which participants watched and reflected on a TikTok they had created. The authors grounded this study in critical posthumanist literacies to analyze and open code five #BookTok content creators’ interview transcripts. Using axial coding, the authors collaboratively determined three overarching and entangled themes: writing for, with and against.

Findings

Findings highlight the nuanced ways #BookTokers consider the AI algorithm in their compositional choices, namely in how they want to disseminate their videos to a larger audience or a more niche-focused community. Throughout the interviews, participants revealed how the AI algorithm was variously situated as audience member, co-author and censor.

Originality/value

This study is grounded in critical posthumanist literacies and explores composition as a joint accomplishment between humans and machines. The authors argue that it is necessary to expand human-centered notions of what it means to write for an audience, to co-author and to resist censorship or gatekeeping.

Details

English Teaching: Practice & Critique, vol. 23 no. 1
Type: Research Article
ISSN: 1175-8708

Article
Publication date: 9 September 2022

Enrico Bracci

Abstract

Purpose

Governments are increasingly turning to artificial intelligence (AI) algorithmic systems to increase the efficiency and effectiveness of public service delivery. While the diffusion of AI offers several desirable benefits, caution and attention should be paid to the accountability of AI algorithmic decision-making systems in the public sector. The purpose of this paper is to establish the main challenges that an AI algorithm might bring about for public service accountability. In doing so, the paper also delineates future avenues of investigation for scholars.

Design/methodology/approach

This paper builds on previous literature and anecdotal cases of AI applications in public services, drawing on streams of literature from accounting, public administration and information technology ethics.

Findings

Based on previous literature, the paper highlights the accountability gaps that AI can bring about and the possible countermeasures. The introduction of AI algorithms in public services modifies the chain of responsibility. This distributed responsibility requires an accountability governance, together with technical solutions, to meet multiple accountabilities and close the accountability gaps. The paper also delineates a research agenda for accounting scholars to make accountability more “intelligent”.

Originality/value

The findings of the paper shed new light and perspective on how public service accountability in AI should be considered and addressed. The results developed in this paper will stimulate scholars to explore, also from an interdisciplinary perspective, the issues public service organizations are facing to make AI algorithms accountable.

Details

Accounting, Auditing & Accountability Journal, vol. 36 no. 2
Type: Research Article
ISSN: 0951-3574

Article
Publication date: 24 October 2021

Adriana Tiron-Tudor and Delia Deliu

Abstract

Purpose

Algorithms, artificial intelligence (AI), machines and all emerging digital technologies disrupt traditional auditing, raising many questions and debates. One of the central issues of this debate is the complex human-algorithm duality, which is the focus of this investigation. This study aims to investigate algorithms’ penetration of auditing activities, with a specific focus on a future scenario of human-algorithm interaction in performing audits as intelligent teams.

Design/methodology/approach

The research uses a qualitative reflexive thematic analysis, taking into consideration the academic literature, as well as professional reports and websites of the “Big Four” audit firms and internationally recognized accounting bodies.

Findings

The results examine the complex duality between algorithms and human-based actions in the institutional settings of auditing activities by highlighting the current stage of algorithm, machine and AI emergence in auditing and providing real-life examples of their use in audits. Furthermore, they emphasize the strengths and weaknesses of algorithms compared to human beings. Based on the results, a discussion of the human-algorithm interaction through the lens of the Human-in-the-Loop (HITL) approach concludes that Auditor-Governing-the-Loop may be a possible scenario for the future of the auditing profession.

Research limitations/implications

This study is exploratory, investigating academic and practitioner written debates, analyses and reports, which limits its applicability. Nonetheless, the paper adds to the ongoing discussion on emerging technologies and auditing research. Finally, the authors address some potential biases associated with the extended use of algorithms and discuss future research implications. Future research should empirically test how the human-algorithm tandem is working and how AI and other emerging technologies will affect auditing activities and the auditing profession.

Practical implications

The study provides valuable insights for audit firms, auditors, professional organizations, standard-setters and regulators, revealing the implications of algorithms’ penetration of auditing activities from the perspective of the complex human-algorithm duality. Moreover, the implications for academic education and research are highlighted: the educational curriculum should be updated to include new technology issues, and further investigation is needed into human-algorithm interaction issues such as trust, legal restrictions, ethical concerns, security and responsibility.

Originality/value

The research uses HITL as a novel paradigm for responsible AI development in auditing. The study points to the strategic value of a HITL pattern for organizational reflexivity that, according to the study, ensures that the algorithm’s output meets the audit organization’s requirements and changes in the environment.

Details

Qualitative Research in Accounting & Management, vol. 19 no. 3
Type: Research Article
ISSN: 1176-6093

Article
Publication date: 23 September 2021

Donghee Shin, Azmat Rasul and Anestis Fotiadis

Abstract

Purpose

As algorithms permeate nearly every aspect of digital life, artificial intelligence (AI) systems exert a growing influence on human behavior in the digital milieu. Despite growing interest, little is known about the roles and effects of algorithmic literacy (AL) in user acceptance. The purpose of this study is to contextualize AL in the AI environment by empirically examining the role of AL in developing users' information processing in algorithms. The authors analyze how users engage with over-the-top (OTT) platforms, what awareness users have of the algorithmic platform and how that awareness may shape their interaction with these systems.

Design/methodology/approach

This study employed multiple-group equivalence methods to test invariance across two groups and the hypotheses concerning differences in the effects of AL. The method examined how AL helps users to envisage, understand and work with algorithms, depending on their understanding of the control of the information flow embedded within them.

Findings

Our findings clarify what functions AL plays in the adoption of OTT platforms and how users experience algorithms, particularly in contexts where AI is used in OTT algorithms to provide personalized recommendations. The results point to the heuristic functions of AL in connection with its ties to trust and the ensuing attitudes and behaviors. Heuristic processes using AL strongly affect the credibility of recommendations and the way users understand the accuracy and personalization of results. The authors argue that critical assessment of AL must be understood not just in terms of how it is used to evaluate trust in a service, but also in terms of how it is performatively related to the modeling of algorithmic personalization.

Research limitations/implications

The relationship between AL and trust in an algorithm lends strategic direction to developing user-centered algorithms in OTT contexts. As the AI industry has faced decreasing credibility, the role of user trust will offer insights into credibility and trust in algorithms. To better understand how to cultivate a sense of literacy regarding algorithm consumption, the AI industry could provide examples of what positive engagement with algorithm platforms looks like.

Originality/value

User cognitive processes of AL provide conceptual frameworks for algorithm services and a practical guideline for the design of OTT services. Framing the cognitive process of AL in reference to trust has made relevant contributions to the ongoing debate surrounding algorithms and literacy. While the topic of AL is widely recognized, empirical evidence on the effects of AL is relatively rare, particularly from the user's behavioral perspective. No formal theoretical model of algorithmic decision-making based on the dual processing model has been researched.

Content available
Article
Publication date: 14 March 2023

Paula Hall and Debbie Ellis

Abstract

Purpose

Gender bias in artificial intelligence (AI) should be solved as a priority before AI algorithms become ubiquitous, perpetuating and accentuating the bias. While the problem has been identified as an established research and policy agenda, a cohesive review of existing research specifically addressing gender bias from a socio-technical viewpoint is lacking. Thus, the purpose of this study is to determine the social causes and consequences of, and proposed solutions to, gender bias in AI algorithms.

Design/methodology/approach

A comprehensive systematic review followed established protocols to ensure accurate and verifiable identification of suitable articles. The process revealed 177 articles in the socio-technical framework, with 64 articles selected for in-depth analysis.

Findings

Most previous research has focused on technical rather than social causes, consequences and solutions to AI bias. From a social perspective, gender bias in AI algorithms can be attributed equally to algorithmic design and training datasets. Social consequences are wide-ranging, with amplification of existing bias the most common at 28%. Social solutions were concentrated on algorithmic design, specifically improving diversity in AI development teams (30%), increasing awareness (23%), human-in-the-loop (23%) and integrating ethics into the design process (21%).

Originality/value

This systematic review is the first of its kind to focus on gender bias in AI algorithms from a social perspective within a socio-technical framework. Identification of key causes and consequences of bias and the breakdown of potential solutions provides direction for future research and policy within the growing field of AI ethics.

Peer review

The peer review history for this article is available at https://publons.com/publon/10.1108/OIR-08-2021-0452

Details

Online Information Review, vol. 47 no. 7
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 27 July 2021

Anahita Farhang Ghahfarokhi, Taha Mansouri, Mohammad Reza Sadeghi Moghaddam, Nila Bahrambeik, Ramin Yavari and Mohammadreza Fani Sani

Abstract

Purpose

The purpose of this paper is to detect credit card fraud using the asexual reproduction optimization (ARO) algorithm. The best algorithm previously implemented on this Brazilian dataset was the artificial immune system (AIS) algorithm, but its time and cost are high. Using ARO, the authors achieved better results at lower cost and in less time. Their framework addresses problems such as the high cost and long training time of credit card fraud detection. This simple and effective approach achieved better results than the best techniques implemented on this dataset so far.

Design/methodology/approach

In this paper, the authors used the ARO algorithm to classify bank transactions as fraudulent or legitimate. ARO is inspired by asexual reproduction, in which one parent produces offspring identical to itself. In the ARO algorithm, an individual is represented by a vector of variables, each variable being a chromosome; a chromosome is represented by a binary string of genes. Every generated answer is assumed to exist in the environment, and because of limited resources, only the best solution can remain alive. The algorithm starts with a random individual in the solution space. This parent reproduces an offspring called a bud. Either the parent or the offspring survives: whichever performs better on the fitness function remains alive. If the offspring performs well, it becomes the next parent and the current parent is discarded; otherwise, the offspring perishes and the present parent survives. The algorithm repeats until the stopping condition is met.
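
The survive-or-perish loop described above maps directly onto a simple single-parent search. The sketch below is an illustrative reconstruction, not the authors' implementation: the OneMax toy objective, the per-gene mutation rate and the iteration budget are all assumptions chosen for the demo, whereas the paper evolves a fraud classifier against transaction data.

```python
import random

def aro_maximize(fitness, n_bits, iterations=2000, seed=42):
    """Single-parent loop in the spirit of ARO: the parent reproduces a
    mutated offspring (the "bud"), the two compete on the fitness
    function, and the winner becomes the next parent."""
    rng = random.Random(seed)
    # Start from a random binary chromosome in the solution space.
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(iterations):
        # Budding: copy the parent and flip each gene with small probability.
        bud = [g ^ (rng.random() < 1.0 / n_bits) for g in parent]
        # Competition: whichever of parent and bud scores higher survives.
        if fitness(bud) >= fitness(parent):
            parent = bud
    return parent

# Toy objective: maximize the number of 1-bits (OneMax).
best = aro_maximize(fitness=sum, n_bits=16)
print(sum(best))
```

Because the loser is always discarded, fitness never decreases, which preserves the "only the best solution can remain alive" invariant the abstract describes.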

Findings

Results showed that ARO increased the AUC (area under the receiver operating characteristic (ROC) curve), sensitivity, precision, specificity and accuracy by 13%, 25%, 56%, 3% and 3%, respectively, in comparison with AIS. The authors achieved a high precision value, indicating that if ARO flags a record as fraud, it is fraudulent with high probability. Supporting a real-time fraud detection system is another vital issue: ARO not only outperforms AIS on the above criteria but also decreases training time by 75% compared with AIS, a significant figure.
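
The threshold-based metrics named above follow directly from a confusion matrix (AUC additionally needs ranking scores across thresholds, so it is left out). A quick sketch with hypothetical counts (not the paper's figures):

```python
# Hypothetical confusion-matrix counts for a fraud classifier,
# with fraud as the positive class (made up for illustration).
tp, fp, tn, fn = 80, 10, 900, 10

sensitivity = tp / (tp + fn)                   # share of frauds caught
precision   = tp / (tp + fp)                   # share of flags that are truly fraud
specificity = tn / (tn + fp)                   # share of legitimate records passed
accuracy    = (tp + tn) / (tp + fp + tn + fn)  # overall hit rate

print(round(precision, 3))  # 0.889
```

A high precision, as the authors emphasize, is what makes a fraud flag actionable: nearly every record marked as fraud really is one.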

Originality/value

In this paper, the authors implemented ARO for credit card fraud detection and compared the results with those of AIS, one of the best methods ever implemented on the benchmark dataset. The chief focus of fraud detection studies is finding algorithms that can separate legitimate transactions from fraudulent ones with high detection accuracy in the shortest time and at low cost. ARO meets all these demands.

Article
Publication date: 22 January 2024

Dinesh Kumar and Nidhi Suthar

Abstract

Purpose

Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this exhilaration is being tempered by growing concerns about the moral and legal implications of using AI in marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic discrimination and data privacy, there are no definitive answers. This paper aims to fill this gap by investigating AI’s ethical and legal concerns in marketing and suggesting feasible solutions.

Design/methodology/approach

The paper synthesises information from academic articles, industry reports, case studies and legal documents through a thematic literature review. A qualitative analysis approach categorises and interprets ethical and legal challenges and proposes potential solutions.

Findings

The findings of this paper raise ethical and legal concerns related to AI in the marketing area. Ethical concerns include discrimination, bias, manipulation, job displacement, absence of social interaction, cybersecurity, unintended consequences, environmental impact and privacy; legal issues include consumer security, responsibility, liability, brand protection, competition law, agreements, data protection, consumer protection and intellectual property rights. The paper discusses each of these along with potential solutions.

Research limitations/implications

Notwithstanding the interesting insights gathered from this investigation of the ethical and legal consequences of AI in marketing, it is important to recognise its limits. First, the study is confined to a review of the most important ethical and legal issues pertaining to AI in marketing; additional repercussions, such as those associated with intellectual property, contracts and licencing, should be investigated more deeply in future studies. Although this study gives various answers and best practices for tackling the stated ethical and legal concerns, the viability and efficacy of these solutions may differ depending on the context and industry, so more research and case studies are required to evaluate their applicability in other circumstances. The research is mostly based on a literature review and may not represent the experiences or opinions of all stakeholders engaged in AI-powered marketing; further study might involve interviews or surveys with marketing professionals, customers and other key stakeholders to offer a fuller understanding of the practical difficulties and solutions. Because of the rapid pace of technical progress, AI’s ethical and regulatory ramifications in marketing are continually evolving. Consequently, this work should be a springboard for more research and continuing conversations on this subject.

Practical implications

This study’s findings have several practical implications for marketing professionals.

Emphasising openness and explainability: marketing professionals should prioritise transparency in their use of AI, ensuring that customers are fully informed about data collection and utilisation for targeted advertising. By promoting openness and explainability, marketers can foster customer trust and avoid the negative consequences of a lack of transparency.

Establishing ethical guidelines: marketing professionals need to develop ethical rules for the creation and implementation of AI-powered marketing strategies. Adhering to ethical principles ensures compliance with legal norms and aligns with the organisation’s values and ideals.

Investing in bias detection tools and privacy-enhancing technology: to mitigate risks associated with AI in marketing, marketers should allocate resources to develop and implement bias detection tools and privacy-enhancing technology. These tools can identify and address biases in AI algorithms, safeguard consumer privacy and extract valuable insights from consumer data.

Social implications

This study’s social implications emphasise the need for a comprehensive approach to address the ethical and legal challenges of AI in marketing. This includes adopting a responsible innovation framework, promoting ethical leadership, using ethical decision-making frameworks and conducting multidisciplinary research. By incorporating these approaches, marketers can navigate the complexities of AI in marketing responsibly, foster an ethical organisational culture, make informed ethical decisions and develop effective solutions. Such practices promote public trust, ensure equitable distribution of benefits and risk, and mitigate potential negative social consequences associated with AI in marketing.

Originality/value

To the best of the authors’ knowledge, this paper is among the first to explore potential solutions comprehensively. This paper provides a nuanced understanding of the challenges by using a multidisciplinary framework and synthesising various sources. It contributes valuable insights for academia and industry.

Details

Journal of Information, Communication and Ethics in Society, vol. 22 no. 1
Type: Research Article
ISSN: 1477-996X

Open Access
Article
Publication date: 10 May 2023

Marko Kureljusic and Erik Karger

Abstract

Purpose

Accounting information systems are mainly rule-based, and data are usually available and well-structured. However, many accounting systems are yet to catch up with current technological developments. Thus, artificial intelligence (AI) in financial accounting is often applied only in pilot projects. Using AI-based forecasts in accounting enables proactive management and detailed analysis. However, thus far, there is little knowledge about which prediction models have already been evaluated for accounting problems. Given this lack of research, our study aims to summarize existing findings on how AI is used for forecasting purposes in financial accounting. Therefore, the authors aim to provide a comprehensive overview and agenda for future researchers to gain more generalizable knowledge.

Design/methodology/approach

The authors identify existing research on AI-based forecasting in financial accounting by conducting a systematic literature review. For this purpose, the authors used Scopus and Web of Science as scientific databases. The data collection resulted in a final sample size of 47 studies. These studies were analyzed regarding their forecasting purpose, sample size, period and applied machine learning algorithms.

Findings

The authors identified three application areas and presented details regarding the accuracy and AI methods used. Our findings show that sociotechnical and generalizable knowledge is still missing. Therefore, the authors also develop an open research agenda that future researchers can address to enable the more frequent and efficient use of AI-based forecasts in financial accounting.

Research limitations/implications

Owing to the rapid development of AI algorithms, our results can only provide an overview of the current state of research. Therefore, it is likely that new AI algorithms will be applied, which have not yet been covered in existing research. However, interested researchers can use our findings and future research agenda to develop this field further.

Practical implications

Given the high relevance of AI in financial accounting, our results have several implications and potential benefits for practitioners. First, the authors provide an overview of AI algorithms used in different accounting use cases. Based on this overview, companies can evaluate the AI algorithms that are most suitable for their practical needs. Second, practitioners can use our results as a benchmark of what prediction accuracy is achievable and should strive for. Finally, our study identified several blind spots in the research, such as ensuring employee acceptance of machine learning algorithms, which companies should consider to implement AI in financial accounting successfully.

Originality/value

To the best of our knowledge, no study has yet been conducted that provided a comprehensive overview of AI-based forecasting in financial accounting. Given the high potential of AI in accounting, the authors aimed to bridge this research gap. Moreover, our cross-application view provides general insights into the superiority of specific algorithms.

Details

Journal of Applied Accounting Research, vol. 25 no. 1
Type: Research Article
ISSN: 0967-5426

Article
Publication date: 9 February 2023

Alberto Lopez and Ricardo Garza

Abstract

Purpose

Will consumers accept artificial intelligence (AI) products that evaluate them? New consumer products offer AI evaluations. However, previous research has never investigated how consumers feel about being evaluated by AI instead of by a human. Furthermore, why do consumers experience being evaluated by an AI algorithm or by a human differently? This research aims to offer answers to these questions.

Design/methodology/approach

Three laboratory experiments were conducted. Experiments 1 and 2 test the main effect of evaluator (AI and human) and evaluations received (positive, neutral and negative) on fairness perception of the evaluation. Experiment 3 replicates previous findings and tests the mediation effect.

Findings

Building on previous research on consumer biases and lack-of-transparency anxiety, the authors present converging evidence that consumers who received positive evaluations reported no significant difference in perceived fairness regardless of the evaluator (human or AI). Conversely, consumers who received negative evaluations perceived the evaluation as less fair when it was given by AI. Further moderated mediation analysis showed that consumers who receive a negative evaluation from AI experience higher levels of lack-of-transparency anxiety, which is the underlying mechanism driving this effect.

Originality/value

To the best of the authors' knowledge, no previous research has investigated how consumers feel about being evaluated by AI instead of by a human. This consumer bias against AI evaluations is a phenomenon previously overlooked in the marketing literature, with many implications for the development and adoption of new AI products, as well as theoretical contributions to the nascent literature on consumer experience and AI.

Details

Journal of Research in Interactive Marketing, vol. 17 no. 6
Type: Research Article
ISSN: 2040-7122

Open Access
Article
Publication date: 27 September 2022

Philip T. Roundy

Abstract

Purpose

Entrepreneurs are increasingly relying on artificial intelligence (AI) to assist in creating and scaling new ventures. Research on entrepreneurs’ use of AI algorithms (machine learning, natural language processing, artificial neural networks) has focused on the intra-organizational implications of AI. The purpose of this paper is to explore how entrepreneurs’ adoption of AI influences their inter- and meta-organizational relationships.

Design/methodology/approach

To address the limited understanding of the consequences of AI for communities of entrepreneurs, this paper develops a theory to explain how AI algorithms influence the micro (entrepreneur) and macro (system) dynamics of entrepreneurial ecosystems.

Findings

The theory’s main insight is that substituting AI for entrepreneurial ecosystem interactions influences not only entrepreneurs’ pursuit of opportunities but also the coordination of their local entrepreneurial ecosystems.

Originality/value

The theory contributes by drawing attention to the inter-organizational implications of AI, explaining how the decision to substitute AI for human interactions is a micro-foundation of ecosystems, and motivating a research agenda at the intersection of AI and entrepreneurial ecosystems.

Details

Journal of Ethics in Entrepreneurship and Technology, vol. 2 no. 1
Type: Research Article
ISSN: 2633-7436
