Search results

1 – 10 of over 1000
Article
Publication date: 8 March 2024

Sarah Jerasa and Sarah K. Burriss

Abstract

Purpose

Artificial intelligence (AI) has become increasingly important and influential in reading and writing. The influx of social media digital spaces, like TikTok, has also shifted the ways multimodal composition takes place alongside AI. This study aims to argue that within spaces like TikTok, human composers must attend to the ways they write for, with and against the AI-powered algorithm.

Design/methodology/approach

Data collection was drawn from a larger study of #BookTok (the TikTok subcommunity for readers) and included semi-structured interviews in which participants watched and reflected on a TikTok they had created. The authors grounded this study in critical posthumanist literacies to analyze and open code five #BookTok content creators’ interview transcripts. Using axial coding, the authors collaboratively identified three overarching, entangled themes: writing for, with and against.

Findings

Findings highlight the nuanced ways #BookTokers consider the AI algorithm in their compositional choices, namely in how they seek to disseminate their videos to a larger audience or a more niche-focused community. Throughout the interviews, participants revealed how the AI algorithm was variously positioned as audience member, co-author and censor.

Originality/value

This study is grounded in critical posthumanist literacies and explores composition as a joint accomplishment between humans and machines. The authors argue that it is necessary to expand our human-centered notions of what it means to write for an audience, to co-author and to resist censorship or gatekeeping.

Details

English Teaching: Practice & Critique, vol. 23 no. 1
Type: Research Article
ISSN: 1175-8708

Article
Publication date: 14 March 2023

Paula Hall and Debbie Ellis

Abstract

Purpose

Gender bias in artificial intelligence (AI) should be solved as a priority before AI algorithms become ubiquitous, perpetuating and accentuating the bias. While the problem has been identified as an established research and policy agenda, a cohesive review of existing research specifically addressing gender bias from a socio-technical viewpoint is lacking. Thus, the purpose of this study is to determine the social causes and consequences of, and proposed solutions to, gender bias in AI algorithms.

Design/methodology/approach

A comprehensive systematic review followed established protocols to ensure accurate and verifiable identification of suitable articles. The process revealed 177 articles in the socio-technical framework, with 64 articles selected for in-depth analysis.

Findings

Most previous research has focused on technical rather than social causes, consequences and solutions to AI bias. From a social perspective, gender bias in AI algorithms can be attributed equally to algorithmic design and training datasets. Social consequences are wide-ranging, with amplification of existing bias the most common at 28%. Social solutions were concentrated on algorithmic design, specifically improving diversity in AI development teams (30%), increasing awareness (23%), human-in-the-loop (23%) and integrating ethics into the design process (21%).

Originality/value

This systematic review is the first of its kind to focus on gender bias in AI algorithms from a social perspective within a socio-technical framework. Identification of key causes and consequences of bias and the breakdown of potential solutions provides direction for future research and policy within the growing field of AI ethics.

Peer review

The peer review history for this article is available at https://publons.com/publon/10.1108/OIR-08-2021-0452

Details

Online Information Review, vol. 47 no. 7
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 22 January 2024

Dinesh Kumar and Nidhi Suthar

Abstract

Purpose

Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this exhilaration is being tempered by growing concerns about the moral and legal implications of using AI in marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic discrimination and data privacy, there are no definitive answers. This paper aims to fill this gap by investigating AI’s ethical and legal concerns in marketing and suggesting feasible solutions.

Design/methodology/approach

The paper synthesises information from academic articles, industry reports, case studies and legal documents through a thematic literature review. A qualitative analysis approach categorises and interprets ethical and legal challenges and proposes potential solutions.

Findings

The findings of this paper raise concerns about ethical and legal challenges related to AI in marketing. The paper discusses ethical concerns relating to discrimination, bias, manipulation, job displacement, absence of social interaction, cybersecurity, unintended consequences, environmental impact and privacy, alongside legal issues such as consumer security, responsibility, liability, brand protection, competition law, agreements, data protection, consumer protection and intellectual property rights, together with potential solutions for each.

Research limitations/implications

Notwithstanding the interesting insights gathered from this investigation of the ethical and legal consequences of AI in marketing, it is important to recognise the limits of this research. First, the focus of this study is confined to a review of the most important ethical and legal issues pertaining to AI in marketing; additional possible repercussions, such as those associated with intellectual property, contracts and licencing, should be investigated more deeply in future studies. Although this study offers various solutions and best practices for tackling the stated ethical and legal concerns, their viability and efficacy may differ depending on the context and industry; thus, more research and case studies are required to evaluate their applicability and efficacy in other circumstances. This research is based mostly on a literature review and may not represent the experiences or opinions of all stakeholders engaged in AI-powered marketing; further study might involve interviews or surveys with marketing professionals, customers and other key stakeholders to offer a fuller understanding of the practical difficulties and solutions. Finally, because of the rapid pace of technical progress, the ethical and regulatory ramifications of AI in marketing continue to evolve; consequently, this work should be a springboard for further research and continuing conversations on this subject.

Practical implications

This study’s findings have several practical implications for marketing professionals.

Emphasising openness and explainability: marketing professionals should prioritise transparency in their use of AI, ensuring that customers are fully informed about data collection and utilisation for targeted advertising. By promoting openness and explainability, marketers can foster customer trust and avoid the negative consequences of a lack of transparency.

Establishing ethical guidelines: marketing professionals need to develop ethical rules for the creation and implementation of AI-powered marketing strategies. Adhering to ethical principles ensures compliance with legal norms and aligns with the organisation’s values and ideals.

Investing in bias detection tools and privacy-enhancing technology: to mitigate risks associated with AI in marketing, marketers should allocate resources to develop and implement bias detection tools and privacy-enhancing technology. These tools can identify and address biases in AI algorithms, safeguard consumer privacy and extract valuable insights from consumer data.

Social implications

This study’s social implications emphasise the need for a comprehensive approach to address the ethical and legal challenges of AI in marketing. This includes adopting a responsible innovation framework, promoting ethical leadership, using ethical decision-making frameworks and conducting multidisciplinary research. By incorporating these approaches, marketers can navigate the complexities of AI in marketing responsibly, foster an ethical organisational culture, make informed ethical decisions and develop effective solutions. Such practices promote public trust, ensure equitable distribution of benefits and risk, and mitigate potential negative social consequences associated with AI in marketing.

Originality/value

To the best of the authors’ knowledge, this paper is among the first to explore potential solutions comprehensively. This paper provides a nuanced understanding of the challenges by using a multidisciplinary framework and synthesising various sources. It contributes valuable insights for academia and industry.

Details

Journal of Information, Communication and Ethics in Society, vol. 22 no. 1
Type: Research Article
ISSN: 1477-996X

Open Access
Article
Publication date: 10 May 2023

Marko Kureljusic and Erik Karger

Abstract

Purpose

Accounting information systems are mainly rule-based, and data are usually available and well-structured. However, many accounting systems are yet to catch up with current technological developments, so artificial intelligence (AI) in financial accounting is often applied only in pilot projects. Using AI-based forecasts in accounting enables proactive management and detailed analysis. However, thus far, there is little knowledge about which prediction models have already been evaluated for accounting problems. Given this lack of research, our study aims to summarize existing findings on how AI is used for forecasting purposes in financial accounting, providing a comprehensive overview and an agenda for future researchers to gain more generalizable knowledge.

Design/methodology/approach

The authors identify existing research on AI-based forecasting in financial accounting by conducting a systematic literature review. For this purpose, the authors used Scopus and Web of Science as scientific databases. The data collection resulted in a final sample size of 47 studies. These studies were analyzed regarding their forecasting purpose, sample size, period and applied machine learning algorithms.

Findings

The authors identified three application areas and presented details regarding the accuracy and AI methods used. Our findings show that sociotechnical and generalizable knowledge is still missing. Therefore, the authors also develop an open research agenda that future researchers can address to enable the more frequent and efficient use of AI-based forecasts in financial accounting.

Research limitations/implications

Owing to the rapid development of AI algorithms, our results can only provide an overview of the current state of research. Therefore, it is likely that new AI algorithms will be applied, which have not yet been covered in existing research. However, interested researchers can use our findings and future research agenda to develop this field further.

Practical implications

Given the high relevance of AI in financial accounting, our results have several implications and potential benefits for practitioners. First, the authors provide an overview of AI algorithms used in different accounting use cases; based on this overview, companies can evaluate which AI algorithms are most suitable for their practical needs. Second, practitioners can use our results as a benchmark of what prediction accuracy is achievable and should strive for. Finally, our study identified several blind spots in the research, such as ensuring employee acceptance of machine learning algorithms, which companies should consider in order to implement AI in financial accounting successfully.

Originality/value

To the best of our knowledge, no prior study has provided a comprehensive overview of AI-based forecasting in financial accounting. Given the high potential of AI in accounting, the authors aimed to bridge this research gap. Moreover, our cross-application view provides general insights into the superiority of specific algorithms.

Details

Journal of Applied Accounting Research, vol. 25 no. 1
Type: Research Article
ISSN: 0967-5426

Article
Publication date: 9 February 2023

Alberto Lopez and Ricardo Garza

Abstract

Purpose

Will consumers accept artificial intelligence (AI) products that evaluate them? New consumer products offer AI evaluations. However, previous research has never investigated how consumers feel about being evaluated by AI instead of by a human. Furthermore, why do consumers experience being evaluated by an AI algorithm or by a human differently? This research aims to offer answers to these questions.

Design/methodology/approach

Three laboratory experiments were conducted. Experiments 1 and 2 test the main effect of evaluator (AI and human) and evaluations received (positive, neutral and negative) on fairness perception of the evaluation. Experiment 3 replicates previous findings and tests the mediation effect.

Findings

Building on previous research on consumer biases and lack-of-transparency anxiety, the authors present converging evidence that consumers who received positive evaluations reported no significant difference in the perceived fairness of the evaluation regardless of the evaluator (human or AI). Conversely, consumers who received negative evaluations perceived the evaluation as less fair when it was given by AI. Further moderated mediation analysis showed that consumers who receive a negative evaluation from AI experience higher levels of lack-of-transparency anxiety, which in turn is the underlying mechanism driving this effect.

Originality/value

To the best of the authors' knowledge, no previous research has investigated how consumers feel about being evaluated by AI instead of by a human. This consumer bias against AI evaluations is a phenomenon previously overlooked in the marketing literature, with many implications for the development and adoption of new AI products, as well as theoretical contributions to the nascent literature on consumer experience and AI.

Details

Journal of Research in Interactive Marketing, vol. 17 no. 6
Type: Research Article
ISSN: 2040-7122

Article
Publication date: 12 February 2024

Hamid Reza Saeidnia, Elaheh Hosseini, Shadi Abdoli and Marcel Ausloos

Abstract

Purpose

The study aims to analyze the synergy of artificial intelligence (AI) with scientometrics, webometrics and bibliometrics, highlighting the potential applications and benefits of AI algorithms in these fields.

Design/methodology/approach

By conducting a systematic literature review, our aim is to explore the potential of AI in revolutionizing the methods used to measure and analyze scholarly communication, identify emerging research trends and evaluate the impact of scientific publications. To achieve this, we implemented a comprehensive search strategy across reputable databases such as ProQuest, IEEE Xplore, EBSCO, Web of Science and Scopus. Our search encompassed articles published from January 1, 2000, to September 2022, resulting in a thorough review of 61 relevant articles.

Findings

(1) In scientometrics, the application of AI yields distinct advantages, such as analyses of publications and citations, research impact prediction, collaboration and research trend analysis, and knowledge mapping, within a more objective and reliable framework. (2) In webometrics, AI algorithms can enhance web crawling and data collection, web link analysis, web content analysis, social media analysis, web impact analysis and recommender systems. (3) In bibliometrics, the potential of AI integration lies in automated data collection, citation analysis, author disambiguation, co-authorship network analysis, research impact assessment, text mining and recommender systems.

Originality/value

This study highlights the new benefits and potential of AI-enhanced scientometrics, webometrics and bibliometrics, underscoring the significant prospects of this synergy.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Book part
Publication date: 14 December 2023

Carol Azungi Dralega

Abstract

In the current post-human society, artificial intelligence (AI) and algorithms are rapidly being deployed in newsrooms around the world to enhance processes of news idea conception, newsgathering, writing, packaging and dissemination. Although AI adaptation has been ongoing, especially in Western newsrooms, over the last decade, this process is only budding in sub-Saharan newsroom contexts. This study explores perceptions, use, prospects and challenges in the adaptation of AI and algorithms in newsrooms. The qualitative survey draws insights from 33 respondents from newspapers, radio stations, online media and community media in Uganda, Tanzania, Rwanda and Ethiopia. The study found varied levels of AI adoption: some newsrooms were not yet using AI, while others were fully experimenting with a variety of tools and functionalities, even producing their own AI tools and changing employment patterns to accommodate the skills needed within this new field. In some of the ‘inactive AI newsrooms’, individual journalists took it upon themselves to learn and use the disruptive technologies, and while general attitudes towards AI were positive among journalists, attitudes among management were generally considered poor. The study concludes that, for the benefits to be maximally leveraged, several bottlenecks in application must be addressed. These include the integration of ‘humans-in-the-loop’, journalistic principles, and decolonial and local contextual perspectives in AI development and use. Such perspectives and synergies would need to be drawn from across media ecosystems, including journalism education, research, policy, industry and developers.

Details

Digitisation, AI and Algorithms in African Journalism and Media Contexts
Type: Book
ISBN: 978-1-80455-135-6

Open Access
Article
Publication date: 17 November 2023

Mika Ruokonen and Paavo Ritala

Abstract

Purpose

The purpose of this paper is to identify the potential and the challenges for different firms in adopting an AI-first strategy. The study attempts to discern if any company can prioritize AI at the forefront of their strategic plans.

Design/methodology/approach

Drawing on illustrative examples from well-known AI leaders like Netflix and Spotify, as well as from upcoming AI startups and industry incumbents, the paper explores the strategic role of AI in core business processes and customer value creation. It also discusses the advent of generative AI tools since late 2022 and their implications for firms’ business strategies.

Findings

The authors identify three types of AI-first strategies, depending on firms’ starting points: digital tycoon, niche carver and asset augmenter. The authors discuss how each strategy can aim to achieve data, algorithmic and execution advantages, and what the strategic bottlenecks and risks are within each strategy.

Originality/value

To the best of the authors’ knowledge, this paper is the first to systematically describe how companies can form “AI-first” strategies from different starting points. The study includes actionable examples ranging from established industry players to emerging startups and industrial incumbents.

Details

Journal of Business Strategy, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0275-6668

Article
Publication date: 15 July 2021

Nehemia Sugianto, Dian Tjondronegoro, Rosemary Stockdale and Elizabeth Irenne Yuwono

Abstract

Purpose

The paper proposes a privacy-preserving artificial intelligence-enabled video surveillance technology to monitor social distancing in public spaces.

Design/methodology/approach

The paper proposes a new Responsible Artificial Intelligence Implementation Framework to guide the proposed solution's design and development. It defines responsible artificial intelligence criteria that the solution needs to meet and provides checklists to enforce those criteria throughout the process. To preserve data privacy, the proposed system incorporates a federated learning approach, performing computation on edge devices to limit the movement of sensitive and identifiable data and to eliminate dependency on cloud computing at a central server.
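The abstract names federated learning on edge devices as the privacy-preserving mechanism but does not give the authors' implementation. A minimal sketch of the underlying federated-averaging idea, on a toy linear model (the function names and the model here are illustrative assumptions, not the paper's code):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local step: gradient descent on a linear model,
    using only that client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One round of federated averaging: each edge device trains
    locally; only model weights (never raw frames or identities)
    travel back to the coordinator, which averages them."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    return sum(w * (n / sizes.sum()) for w, n in zip(local_ws, sizes))

# Tiny demo: two "edge cameras" each hold private samples of y = 2x
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 1))
    clients.append((X, 2.0 * X[:, 0]))

w = np.zeros(1)
for _ in range(30):
    w = federated_round(w, clients)
print(w)  # approaches [2.0] without raw data ever leaving a client
```

The design choice the abstract describes is visible in `federated_round`: the coordinator sees only weight vectors, so identifiable data stays on the device.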

Findings

The proposed system is evaluated through a case study of monitoring social distancing at an airport. The results show how the system can fully address the case study's requirements in terms of its reliability, its usefulness when deployed to the airport's cameras and its compliance with responsible artificial intelligence.

Originality/value

The paper makes three contributions. First, it proposes a real-time social distancing breach detection system on edge that extends from a combination of cutting-edge people detection and tracking algorithms to achieve robust performance. Second, it proposes a design approach to develop responsible artificial intelligence in video surveillance contexts. Third, it presents results and discussion from a comprehensive evaluation in the context of a case study at an airport to demonstrate the proposed system's robust performance and practical usefulness.

Details

Information Technology & People, vol. 37 no. 2
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 29 February 2024

Donghee Shin, Kulsawasd Jitkajornwanich, Joon Soo Lim and Anastasia Spyridou

Abstract

Purpose

This study examined how people assess health information from AI and improve their diagnostic ability to identify health misinformation. The proposed model was designed to test a cognitive heuristic theory in misinformation discernment.

Design/methodology/approach

We proposed the heuristic-systematic model to assess health misinformation processing in the algorithmic context. Using Analysis of Moment Structures (AMOS) 26 software, we tested fairness/transparency/accountability (FAccT) as constructs that influence users’ heuristic evaluation and systematic discernment of misinformation. To test moderating and mediating effects, PROCESS Macro Model 4 was used.
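PROCESS Macro Model 4 corresponds to a simple mediation analysis (X → M → Y with a bootstrapped indirect effect a·b). The authors' data and exact specification are not given here; the following is an illustrative sketch of what such an analysis computes, using NumPy on simulated data rather than AMOS/PROCESS:

```python
import numpy as np

def simple_mediation(x, m, y, n_boot=2000, seed=0):
    """Simple mediation (what PROCESS 'Model 4' estimates):
    path a: x -> m; path b: m -> y controlling for x.
    Indirect effect = a * b, with a percentile bootstrap CI."""
    def ab(xi, mi, yi):
        a = np.polyfit(xi, mi, 1)[0]                      # slope of m ~ x
        Xd = np.column_stack([np.ones_like(xi), xi, mi])  # y ~ 1 + x + m
        b = np.linalg.lstsq(Xd, yi, rcond=None)[0][2]
        return a * b
    est = ab(x, m, y)
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = [ab(x[i], m[i], y[i])
             for i in (rng.integers(0, n, n) for _ in range(n_boot))]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return est, (lo, hi)

# Simulated data where x influences y partly through the mediator m
rng = np.random.default_rng(1)
x = rng.normal(size=300)
m = 0.5 * x + rng.normal(scale=0.5, size=300)            # path a ~ 0.5
y = 0.4 * m + 0.2 * x + rng.normal(scale=0.5, size=300)  # path b ~ 0.4
est, ci = simple_mediation(x, m, y)
print(round(est, 2), ci)  # indirect effect near 0.5 * 0.4 = 0.2
```

Mediation is supported when the bootstrap confidence interval for a·b excludes zero, which is the criterion PROCESS reports.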

Findings

The effect of AI-generated misinformation on people’s perceptions of the veracity of health information may differ according to whether they process misinformation heuristically or systematically. Heuristic processing is significantly associated with the diagnosticity of misinformation. Misinformation is more likely to be correctly diagnosed and checked if it aligns with users’ heuristics or is validated by the diagnosticity they perceive.

Research limitations/implications

When exposed to misinformation through algorithmic recommendations, users’ perceived diagnosticity of misinformation can be predicted accurately from their understanding of normative values. This perceived diagnosticity in turn positively influences their assessments of the accuracy and credibility of the misinformation.

Practical implications

Perceived diagnosticity exerts a key role in fostering misinformation literacy, implying that improving people’s perceptions of misinformation and AI features is an efficient way to change their misinformation behavior.

Social implications

Although there is broad agreement on the need to control and combat health misinformation, the magnitude of this problem remains unknown. It is essential to understand both users’ cognitive processes when it comes to identifying health misinformation and the diffusion mechanism from which such misinformation is framed and subsequently spread.

Originality/value

The mechanisms through which users process and spread misinformation have remained open-ended questions. This study provides theoretical insights and relevant recommendations that can make users and firms/institutions alike more resilient in protecting themselves from the detrimental impact of misinformation.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-04-2023-0167

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527
