Search results

1 – 10 of 10
Article
Publication date: 15 March 2024

Salman Majeed and Woo Gon Kim

To influence consumer pre-purchase decision-making processes, such as brand selection and perceived brand experience, brands are interested in adopting hyperconnected…

Abstract

Purpose

To influence consumer pre-purchase decision-making processes, such as brand selection and perceived brand experience, brands are interested in adopting hyperconnected technological stimuli, such as artificial intelligence, augmented reality (AR), virtual reality, social media and tech devices. However, the understanding of these different hyperconnected touchpoints in previous literature has remained shallow and the results mixed, even though the touchpoints span different technological interfaces/devices and may influence consumer brand selection. This paper aims to solidify the conceptual underpinnings of the role of online hyperconnected stimuli, which may influence consumer psychological reactions in terms of brand selection and experience.

Design/methodology/approach

This paper is conceptual and presents a discussion based on extant literature from various international publishers.

Findings

The authors revealed different technological stimuli in the online hyperconnected environment that may influence consumer online hyperconnected brand selection (OHBS), perceived online hyperconnected brand experience (OHBE), perceived well-being and behavioral intention.

Originality/value

The conceptual understanding of OHBS and perceived OHBE was mixed and inconsistent in previous studies. This paper brings together extant literature to establish the conceptual understanding of antecedents and outcomes of OHBS, i.e. perceived OHBE, perceived well-being and behavioral intention, and presents a cohesive conceptual framework.

Details

Journal of Consumer Marketing, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0736-3761

Article
Publication date: 5 October 2023

Oke Hendra, Benny Kurnianto and Ika Endrawijaya

This study aimed to develop an adapted collaborative governance model for aviation human resource development in Indonesia's approved training organisations (ATO), considering the…

Abstract

Purpose

This study aimed to develop an adapted collaborative governance model for aviation human resource development in Indonesia's approved training organisations (ATO), considering the expected changes in the industry due to advanced technologies. The model, based on Ansell and Gash's approach, emphasizes multi-stakeholder collaboration to ensure workforce development aligns with industry and regulatory standards and accommodates technological advancements.

Design/methodology/approach

Qualitative methods, such as in-depth interviews and focus group discussions, were employed to collect and analyse data.

Findings

The results indicated that collaborative governance is a valuable tool for cultivating competent human resources and facilitating industry improvement in the face of rapid technological change.

Originality/value

The proposed model contributes significantly to the field by promoting inclusive and effective human resource development through the Centre for Aviation Human Resource Development (CAHRD), thereby preparing the Indonesian aviation industry for the impact of advanced technologies. Furthermore, this study contributes to the enhancement of Ansell and Gash's collaborative governance theoretical framework by effectively addressing its empirical gaps concerning vocational education and training challenges within Indonesia's air transportation sector.

Details

Higher Education, Skills and Work-Based Learning, vol. 14 no. 2
Type: Research Article
ISSN: 2042-3896

Article
Publication date: 25 April 2024

Gökhan Yılmaz and Ayşe Şahin-Yılmaz

Artificial intelligence has been one of the most significant and active fields of study in recent years. Artificial intelligence-derived robotic technologies known as chatbots are…

Abstract

Purpose

Artificial intelligence has been one of the most significant and active fields of study in recent years. Artificial intelligence-derived robotic technologies known as chatbots are gaining interest from both academia and industry. By analyzing the development and patterns of research on the chatbot phenomenon within the tourism field, this study seeks to develop a theoretical framework for the interaction between chatbots and tourism.

Design/methodology/approach

Thirty-three articles on chatbots in travel and hospitality, published between 2019 and 2024 and indexed in the Web of Science (WoS) database, were examined using VOSviewer software for bibliometric and thematic content analysis.

Findings

Research on chatbots for tourism and hospitality appears to be in its early stages. The factors influencing tourists’ intentions to use chatbots have been thoroughly researched; the attitudes, perceptions and behavioral intentions of destination, travel agency and restaurant patrons regarding chatbots have been examined; and the quantitative research approach is dominant. In addition, the majority of the studies are based on a particular theory or model.

Originality/value

This is one of the first attempts to directly comprehend and depict the interconnected structures of studies on the interaction between chatbots and tourism through the use of network analysis. Furthermore, the study’s findings can offer academics a comprehensive viewpoint and a reference guide for more accurate assessment and oversight of the chatbot-tourism interaction. Given the scarcity of research on the topic and the fragmented structure of the existing studies, it is imperative to provide both a comprehensive overview and a roadmap for future investigations into the use of chatbots in the travel and hospitality sector.

Details

Worldwide Hospitality and Tourism Themes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1755-4217

Article
Publication date: 11 May 2023

Shivangi Verma and Naval Garg

With the growth and profound influence of technology on our lives, it is important to address the ethical issues inherent to the development and deployment of technology…

Abstract

Purpose

With the growth and profound influence of technology on our lives, it is important to address the ethical issues inherent to the development and deployment of technology. Researchers and practitioners point to the need to examine how technology and ethics interact, how ethical principles regulate technology and what the probable future course of action could be for executing techno-ethical practices effectively in a socio-technical discourse. To address these questions related to techno-ethics, the authors of the present study conducted exploratory research to understand the trend and relevance of technology ethics since its inception.

Design/methodology/approach

The study collected over 679 documents for the period 1990–2022 from the Scopus database. A quantitative bibliometric analysis was conducted to study the patterns of authorship, publications, citations, prominent journals and contributors in the subject area. VOSviewer software was used to visualize and map academic performance in techno-ethics.
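
To illustrate the kind of counting such a bibliometric workflow involves, the following is a minimal Python sketch, not the authors’ code: it tallies publications per year, prolific authors and highly cited documents from a hypothetical Scopus export. The file name and the column names ("Authors", "Year", "Title", "Cited by") are assumptions about the export format, and the visualization and mapping step itself would be carried out in VOSviewer.

import pandas as pd

# Hypothetical Scopus export; file name and column names are assumptions.
df = pd.read_csv("scopus_techno_ethics_1990_2022.csv")

# Publication counts per year (the growth pattern reported in the Findings).
pubs_per_year = df["Year"].value_counts().sort_index()

# Most prolific authors: Scopus lists authors in one semicolon-separated field.
authors = df["Authors"].str.split(";").explode().str.strip()
top_authors = authors.value_counts().head(10)

# Most cited documents.
top_cited = df.sort_values("Cited by", ascending=False).head(10)[["Title", "Year", "Cited by"]]

print(pubs_per_year.tail(), top_authors, top_cited, sep="\n\n")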

Findings

The findings revealed that techno-ethics is an emerging field and requires more investigation to harness its relevance amid ever-changing technology development. The data showed substantial growth in the field of techno-ethics in the humanities, social science and management domains over the last two decades. Also, most of the prominently cited references and documents in the database cover the themes of artificial intelligence, big data, computer ethics, morality, decision-making, IT ethics, human rights, responsibility and privacy.

Originality/value

The article provides a comprehensive overview of scientific production and the main research trends in techno-ethics up to 2022. The study is a pioneering effort to map academic productivity and performance in research on embedding ethics in technology.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Open Access
Article
Publication date: 27 June 2023

Teemu Birkstedt, Matti Minkkinen, Anushree Tandon and Matti Mäntymäki

Following the surge of documents laying out organizations' ethical principles for their use of artificial intelligence (AI), there is a growing demand for translating ethical…

Abstract

Purpose

Following the surge of documents laying out organizations' ethical principles for their use of artificial intelligence (AI), there is a growing demand for translating ethical principles to practice through AI governance (AIG). AIG has emerged as a rapidly growing, yet fragmented, research area. This paper synthesizes the organizational AIG literature by outlining research themes and knowledge gaps as well as putting forward future agendas.

Design/methodology/approach

The authors undertake a systematic literature review on AIG, addressing the current state of its conceptualization and suggesting future directions for AIG scholarship and practice. The review protocol was developed following recommended guidelines for systematic reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).

Findings

The results of the authors’ review confirmed the assumption that AIG is an emerging research topic with few explicit definitions. Moreover, the authors’ review identified four themes in the AIG literature: technology, stakeholders and context, regulation and processes. The central knowledge gaps revealed were the limited understanding of AIG implementation, lack of attention to the AIG context, uncertain effectiveness of ethical principles and regulation, and insufficient operationalization of AIG processes. To address these gaps, the authors present four future AIG agendas: technical, stakeholder and contextual, regulatory, and process. Going forward, the authors propose focused empirical research on organizational AIG processes, the establishment of an AI oversight unit and collaborative governance as a research approach.

Research limitations/implications

To address the identified knowledge gaps, the authors present the following working definition of AIG: AI governance is a system of rules, practices and processes employed to ensure an organization's use of AI technologies aligns with its strategies, objectives, and values, complete with legal requirements, ethical principles and the requirements set by stakeholders. Going forward, the authors propose focused empirical research on organizational AIG processes, the establishment of an AI oversight unit and collaborative governance as a research approach.

Practical implications

For practitioners, the authors highlight training and awareness, stakeholder management and the crucial role of organizational culture, including senior management commitment.

Social implications

For society, the authors’ review elucidates the multitude of stakeholders involved in AI governance activities and the complexities related to balancing the needs of different stakeholders.

Originality/value

By delineating the AIG concept and the associated research themes, knowledge gaps and future agendas, the authors’ review builds a foundation for organizational AIG research, calling for broad contextual investigations and a deep understanding of AIG mechanisms. For practitioners, the authors highlight training and awareness, stakeholder management and the crucial role of organizational culture, including senior management commitment.

Details

Internet Research, vol. 33 no. 7
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 26 September 2023

Yongchao Martin Ma, Xin Dai and Zhongzhun Deng

The purpose of this study is to investigate consumers' emotional responses to artificial intelligence (AI) defeating people. Meanwhile, the authors investigate the negative…

Abstract

Purpose

The purpose of this study is to investigate consumers' emotional responses to artificial intelligence (AI) defeating people. Meanwhile, the authors investigate the negative spillover effect of AI defeating people on consumers' attitudes toward AI companies. The authors also try to alleviate this spillover effect.

Design/methodology/approach

Four studies were used to test the hypotheses. In Study 1, the authors use a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) algorithm to run a sentiment analysis and investigate how AI defeating people influences consumers’ emotions. In Studies 2 to 4, the authors test the effect of AI defeating people on consumers’ attitudes, the mediating effect of negative emotions and the moderating effect of different intentions.
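
As a rough illustration of this kind of sentiment scoring, the following is a minimal Python sketch, not the authors’ pipeline: it runs user-generated comments through a publicly available BERT-family sentiment classifier via the Hugging Face transformers library. The model name and the example comments are placeholders, not the authors’ materials.

from transformers import pipeline

# Stand-in sentiment model; the authors fine-tuned their own BERT on UGC data.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

comments = [
    "The AI crushed the human champion again, and honestly it feels unsettling.",
    "What an impressive milestone for the research team!",
]

for comment, result in zip(comments, classifier(comments)):
    # Each result is a dict such as {"label": "NEGATIVE", "score": 0.99}.
    print(f"{result['label']:>8}  {result['score']:.2f}  {comment}")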

Findings

The authors find that AI defeating people increases consumers' negative emotions. In terms of downstream consequences, AI defeating people induces a spillover effect on consumers' unfavorable attitudes toward AI companies. Emphasizing the intention of helping people can effectively mitigate this negative spillover effect.

Practical implications

The authors’ findings remind governments, policymakers and AI companies to pay attention to the negative effects of AI defeating people and to take reasonable steps to alleviate them. The findings also help consumers understand this phenomenon rationally and control and reduce unnecessary negative emotions in the AI era.

Originality/value

This paper is the first study to examine the adverse effects of AI defeating humans. The authors contribute to research on the dark side of AI, the outcomes of competition matches and the method to analyze emotions in user-generated content (UGC).

Details

Internet Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1066-2243

Open Access
Article
Publication date: 27 September 2023

Myrthe Blösser and Andrea Weihrauch

In spite of the merits of artificial intelligence (AI) in marketing and social media, harm to consumers has prompted calls for AI auditing/certification. Understanding consumers’…

Abstract

Purpose

In spite of the merits of artificial intelligence (AI) in marketing and social media, harm to consumers has prompted calls for AI auditing/certification. Understanding consumers’ approval of AI certification entities is vital for its effectiveness and companies’ choice of certification. This study aims to generate important insights into the consumer perspective of AI certifications and stimulate future research.

Design/methodology/approach

A literature- and status-quo-driven search of the AI certification landscape identifies entities and related concepts. This study empirically explores consumer approval of the most discussed entities in four AI decision domains using an online experiment and outlines a research agenda for AI certification in marketing/social media.

Findings

Trust in AI certification is complex. The empirical findings show that consumers seem to approve more of non-profit entities than of for-profit entities, with government entities receiving the highest approval.

Research limitations/implications

The introduction of AI certification to marketing/social media contributes to work on consumer trust and AI acceptance, and it structures AI certification research from outside marketing to facilitate future research on AI certification for marketing/social media scholars.

Practical implications

For businesses, the authors provide a first insight into consumer preferences for AI-certifying entities, guiding the choice of which entity to use. For policymakers, this work guides their ongoing discussion on “who should certify AI” from a consumer perspective.

Originality/value

To the best of the authors’ knowledge, this work is the first to introduce the topic of AI certification to the marketing/social media literature, provide a novel guideline to scholars and offer the first set of empirical studies examining consumer approval of AI certifications.

Details

European Journal of Marketing, vol. 58 no. 2
Type: Research Article
ISSN: 0309-0566

Open Access
Article
Publication date: 4 April 2024

Bassem T. ElHassan and Alya A. Arabi

The purpose of this paper is to illuminate the ethical concerns associated with the use of artificial intelligence (AI) in the medical sector and to provide solutions that allow…

Abstract

Purpose

The purpose of this paper is to illuminate the ethical concerns associated with the use of artificial intelligence (AI) in the medical sector and to provide solutions that allow maximum benefit to be derived from this technology without compromising ethical principles.

Design/methodology/approach

This paper provides a comprehensive overview of AI in medicine, exploring its technical capabilities, practical applications and ethical implications. Drawing on their expertise, the authors offer insights from both technical and practical perspectives.

Findings

The study identifies several advantages of AI in medicine, including its ability to improve diagnostic accuracy, enhance surgical outcomes and optimize healthcare delivery. However, unresolved ethical issues remain, such as algorithmic bias, lack of transparency, data privacy concerns and the potential for AI to deskill healthcare professionals and erode humanistic values in patient care. It is therefore important to address these issues promptly so that the benefits of AI implementation can be realized without serious drawbacks.

Originality/value

This paper derives its value from the combined practical experience of Professor ElHassan, gained through his practice at top hospitals worldwide, and the theoretical expertise of Dr. Arabi, acquired at international institutes. The authors’ shared experiences provide valuable insights that are beneficial for raising awareness and guiding action in addressing the ethical concerns associated with the integration of artificial intelligence in medicine.

Details

International Journal of Ethics and Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9369

Article
Publication date: 18 January 2024

Yelena Smirnova and Victoriano Travieso-Morales

The general data protection regulation (GDPR) was designed to address privacy challenges posed by globalisation and rapid technological advancements; however, its implementation…

Abstract

Purpose

The general data protection regulation (GDPR) was designed to address privacy challenges posed by globalisation and rapid technological advancements; however, its implementation has also introduced new hurdles for companies. This study aims to analyse and synthesise the existing literature that focuses on challenges of GDPR implementation in business enterprises, while also outlining the directions for future research.

Design/methodology/approach

The methodology of this review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. It uses an extensive search strategy across the Scopus and Web of Science databases, rigorously applying inclusion and exclusion criteria and yielding a detailed analysis of 16 selected studies that concentrate on GDPR implementation challenges in business organisations.

Findings

The findings indicate a predominant use of conceptual study methodologies in prior research, often limited to specific countries and technology-driven sectors. There is also an inclination towards exploring GDPR challenges within small and medium enterprises, while larger enterprises remain comparatively unexplored. Additionally, further investigation is needed to understand the implications of emerging technologies on GDPR compliance.

Research limitations/implications

This study’s limitations include the search strategy’s reliance on two databases, the potential exclusion of relevant research, the limited existing literature on GDPR implementation challenges in a business context and the possible influence of the diverse methodologies and contexts of previous studies on the generalisability of the findings.

Originality/value

The originality of this review lies in its exclusive focus on analysing GDPR implementation challenges within the business context, coupled with a fresh categorisation of these challenges into technical, legal, organisational, and regulatory dimensions.

Details

International Journal of Law and Management, vol. 66 no. 3
Type: Research Article
ISSN: 1754-243X

Open Access
Article
Publication date: 24 May 2023

Bakhtiar Sadeghi, Deborah Richards, Paul Formosa, Mitchell McEwan, Muhammad Hassan Ali Bajwa, Michael Hitchens and Malcolm Ryan

Cybersecurity vulnerabilities are often due to human users acting according to their own ethical priorities. With the goal of providing tailored training to cybersecurity…

Abstract

Purpose

Cybersecurity vulnerabilities are often due to human users acting according to their own ethical priorities. With the goal of providing tailored training to cybersecurity professionals, the authors conducted a study to uncover profiles of human factors that influence which ethical principles are valued highest following exposure to ethical dilemmas presented in a cybersecurity game.

Design/methodology/approach

The authors’ game first sensitises players (cybersecurity trainees) to five cybersecurity ethical principles (beneficence, non-maleficence, justice, autonomy and explicability) and then allows them to explore the application of these principles in multiple cybersecurity scenarios. After playing the game, players rank the five ethical principles in terms of importance. A total of 250 first-year cybersecurity students played the game. To develop profiles, the authors collected players’ demographics, knowledge about ethics, personality, moral stance and values.

Findings

The authors built models to predict the importance of each of the five ethical principles. The analyses show that, generally, the main driver influencing the priority given to specific ethical principles is cultural background, followed by the personality traits of extraversion and conscientiousness. The importance of the ingroup was also a prominent factor.
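
To make the modelling step concrete, the following is a minimal, hypothetical Python sketch, not the authors’ models: it fits a simple regression predicting the rated importance of one principle from profile features. The feature names and the data are illustrative placeholders loosely matching the factors reported above.

import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical player profiles; feature names are assumptions, not the study's instrument.
profiles = pd.DataFrame({
    "cultural_background": ["collectivist", "individualist", "collectivist", "individualist"],
    "extraversion": [3.2, 4.1, 2.8, 3.9],        # Big Five scores on a 1-5 scale
    "conscientiousness": [4.0, 3.5, 4.4, 3.1],
    "ingroup_importance": [4.5, 2.9, 4.1, 3.0],
})
importance_of_justice = [5, 3, 4, 2]             # placeholder post-game importance ratings

# One-hot encode the categorical feature and fit one model per principle.
X = pd.get_dummies(profiles, columns=["cultural_background"])
model = LinearRegression().fit(X, importance_of_justice)
print(dict(zip(X.columns, model.coef_.round(2))))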

Originality/value

Cybersecurity professionals need to understand the impact of users’ ethical choices. To provide ethics training, the profiles uncovered will be used to build artificially intelligent (AI) non-player characters (NPCs) that expose the player to multiple viewpoints. The NPCs will adapt their training according to each player’s predicted viewpoint.

Details

Organizational Cybersecurity Journal: Practice, Process and People, vol. 3 no. 2
Type: Research Article
ISSN: 2635-0270
