Search results

1 – 8 of 8
Executive summary
Publication date: 4 March 2024

HAITI: Gangs will maintain push for Henry’s departure

Details

DOI: 10.1108/OXAN-ES285618

ISSN: 2633-304X

Executive summary
Publication date: 10 January 2024

ECUADOR: Crime crisis presents major test for Noboa

Details

DOI: 10.1108/OXAN-ES284475

ISSN: 2633-304X

Article
Publication date: 17 April 2024

Dirk H.R. Spennemann, Jessica Biles, Lachlan Brown, Matthew F. Ireland, Laura Longmore, Clare L. Singh, Anthony Wallis and Catherine Ward

Abstract

Purpose

The use of generative artificial intelligence (genAI) language models such as ChatGPT to write assignment text is well established. This paper aims to assess to what extent genAI can be used to obtain guidance on how to avoid detection when commissioning and submitting contract-written assignments, and how workable the offered solutions are.

Design/methodology/approach

Although ChatGPT is programmed not to provide answers that are unethical or that may cause harm to people, it can be prompted to answer with inverted moral valence, thereby supplying unethical answers. The authors tasked ChatGPT with generating 30 essays that discussed the benefits of submitting contract-written undergraduate assignments and outlined the best ways of avoiding detection. The authors scored the likelihood that ChatGPT's suggestions would be successful in avoiding detection by markers when submitting contract-written work.
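
A minimal sketch of the kind of generate-then-score workflow the abstract describes, using the openai Python client; the prompt wording, model version and scoring rubric are not given in this abstract and are invented here for illustration:

```python
# Illustrative sketch only: prompt wording, model choice and rubric are
# assumptions, not the authors' actual protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Write an essay discussing the benefits of submitting contract-written "
    "undergraduate assignments and outlining the best ways to avoid detection."
)

def generate_essays(n: int = 30) -> list[str]:
    """Generate n essays from the same prompt, mirroring the study design."""
    essays = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": PROMPT}],
        )
        essays.append(response.choices[0].message.content)
    return essays

# The authors then scored each suggested strategy manually; a hypothetical
# ordinal rubric might be: 0 = easily detected, 1 = detectable by a vigilant
# marker, 2 = likely to remain undetected.
```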

Findings

While the majority of suggested strategies had a low chance of escaping detection, recommendations related to obscuring plagiarism and content blending, as well as techniques related to distraction, had a higher probability of remaining undetected. The authors conclude that ChatGPT can be used successfully as a brainstorming tool to provide cheating advice, but that its success depends on the vigilance of the assignment markers and the cheating student's ability to distinguish between genuinely viable options and those that appear workable but are not.

Originality/value

This paper is a novel application of making ChatGPT answer with inverted moral valence, simulating queries by students who may be intent on escaping detection when committing academic misconduct.

Details

Interactive Technology and Smart Education, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 16 April 2024

Amir Schreiber and Ilan Schreiber

Abstract

Purpose

In the modern digital realm, while artificial intelligence (AI) technologies pave the way for unprecedented opportunities, they also give rise to intricate cybersecurity issues, including threats like deepfakes and unanticipated AI-induced risks. This study aims to address the insufficient exploration of AI cybersecurity awareness in the current literature.

Design/methodology/approach

Using in-depth surveys across varied sectors (N = 150), the authors analyzed the relationship between the absence of AI-risk content in organizational cybersecurity awareness programs and employees' awareness of AI threats.
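
A minimal sketch of the kind of comparison such a survey design implies, with hypothetical file and column names; the authors' actual instruments and statistical tests are not specified in this abstract:

```python
# Hypothetical analysis sketch; column names are invented for illustration.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # assumed file of N = 150 responses

# Split respondents by whether their awareness program covered AI risks.
covered = df[df["program_covers_ai_risk"] == 1]["ai_awareness_score"]
not_covered = df[df["program_covers_ai_risk"] == 0]["ai_awareness_score"]

# Welch's t-test for a difference in mean awareness between the groups.
t_stat, p_value = stats.ttest_ind(covered, not_covered, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")

# Point-biserial correlation between program coverage and awareness.
r, p = stats.pointbiserialr(df["program_covers_ai_risk"], df["ai_awareness_score"])
print(f"r_pb = {r:.2f}, p = {p:.4f}")
```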

Findings

A significant AI-risk knowledge void was observed among users: despite frequent interaction with AI tools, the majority remain unaware of specialized AI threats. A pronounced knowledge gap existed between those who had been trained in AI risks and those who had not, and it was more apparent among non-technical personnel and in sectors managing sensitive information.

Research limitations/implications

This study paves the way for further research, allowing awareness initiatives to be refined and tailored to distinct industries.

Practical implications

It is imperative for organizations to emphasize AI risk training, especially among non-technical staff. Industries handling sensitive data should be at the forefront.

Social implications

Ensuring employees are aware of AI-related threats can lead to a safer digital environment for both organizations and society at large, given the pervasive nature of AI in everyday life.

Originality/value

Unlike most papers about AI risks, the authors do not rely on subjective data drawn from second-hand sources but use objective, authentic data from their own up-to-date anonymous survey.

Details

Information & Computer Security, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2056-4961

Article
Publication date: 29 November 2022

Rajat Kumar Behera, Pradip Kumar Bala and Nripendra P. Rana

Abstract

Purpose

New ways to complete financial transactions have been developed through mobile payment (m-payment) platforms, which allow consumers to access mainstream banking and transact as never before. But does m-payment have veiled consequences? To seek an answer, this research was undertaken to explore the dark sides of m-payment for consumers by extending the theory of innovation resistance (IR) and by measuring non-adoption intention (NAI).

Design/methodology/approach

Three hundred individuals using popular online m-payment apps such as Paytm, PhonePe, Amazon Pay and Google Pay were surveyed for the primary data. IBM AMOS-based structural equation modelling (SEM) was used to analyse the data.
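
IBM AMOS is a GUI tool, so there is no script to reproduce here; as an illustrative open-source stand-in, a minimal SEM sketch with the Python semopy package, using a hypothetical measurement and structural model rather than the authors' actual one:

```python
# Illustrative SEM sketch; the model specification and indicator names
# below are assumptions, not the authors' actual AMOS model.
import pandas as pd
from semopy import Model

df = pd.read_csv("m_payment_survey.csv")  # assumed file of 300 responses

MODEL_DESC = """
# Measurement model (hypothetical indicators)
InnovationResistance =~ ir1 + ir2 + ir3
NonAdoptionIntention =~ nai1 + nai2 + nai3
# Structural model
NonAdoptionIntention ~ InnovationResistance
"""

model = Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # parameter estimates, standard errors, p-values
```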

Findings

Each m-payment transaction leaves a digital record, making some vulnerable consumers concerned about privacy threats. The lack of global standards prevents consumers from participating properly in the m-payment system until common interfaces are established based on up-to-date standards. Self-compassion (SC) characteristics such as anxiety, efficacy, fatigue, wait-and-see tendencies and the excessive-choice-of-technology effect contribute to the non-adoption of m-payment.

Originality/value

This study proposes a threat model and empirically explores the dark sides of m-payment. In addition, it unveils the moderating role of SC in the structural relationship between IR and NAI.

Details

Information Technology & People, vol. 36 no. 7
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 8 March 2024

Sarah Jerasa and Sarah K. Burriss

Abstract

Purpose

Artificial intelligence (AI) has become increasingly important and influential in reading and writing. The rise of social media digital spaces such as TikTok has also shifted the ways multimodal composition takes place alongside AI. This study aims to argue that within spaces like TikTok, human composers must attend to the ways they write for, with and against the AI-powered algorithm.

Design/methodology/approach

Data were drawn from a larger study on #BookTok (the TikTok subcommunity for readers) that included semi-structured interviews in which participants watched and reflected on a TikTok video they had created. The authors grounded the study in critical posthumanist literacies to analyze and open code five #BookTok content creators' interview transcripts. Using axial coding, the authors collaboratively determined three overarching and entangled themes: writing for, with and against.
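
As a toy illustration of the open-to-axial coding step, the excerpts, codes and mapping below are invented stand-ins, not the study's actual codebook:

```python
# Toy sketch: collapsing first-pass open codes into axial themes.
from collections import defaultdict

# Open codes assigned to transcript excerpts during first-pass coding.
open_codes = {
    "excerpt_01": ["hashtag choice", "posting time"],
    "excerpt_02": ["mimicking trends", "algorithm as audience"],
    "excerpt_03": ["avoiding flagged words", "algorithm as censor"],
}

# Axial coding: map each open code to one of the overarching themes.
axial_map = {
    "hashtag choice": "writing for",
    "posting time": "writing for",
    "mimicking trends": "writing with",
    "algorithm as audience": "writing with",
    "avoiding flagged words": "writing against",
    "algorithm as censor": "writing against",
}

themes = defaultdict(list)
for excerpt, codes in open_codes.items():
    for code in codes:
        themes[axial_map[code]].append((excerpt, code))
print(dict(themes))
```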

Findings

Findings highlight the nuanced ways #BookTokers consider the AI algorithm in their compositional choices, namely, how they want to disseminate their videos to a larger audience or a more niche-focused community. Throughout the interviews, participants revealed how the AI algorithm was variously situated as audience member, co-author and censor.

Originality/value

This study is grounded in critical posthumanist literacies and explores composition as a joint accomplishment between humans and machines. The authors argue that it is necessary to expand human-centered notions of what it means to write for an audience, to co-author and to resist censorship or gatekeeping.

Details

English Teaching: Practice & Critique, vol. 23 no. 1
Type: Research Article
ISSN: 1175-8708

Article
Publication date: 21 February 2024

Serhat Adem Sop and Doğa Kurçer

Abstract

Purpose

This study aims to explore whether Chat Generative Pre-trained Transformer (ChatGPT) can produce quantitative data sets for researchers who could behave unethically through data fabrication.

Design/methodology/approach

A two-stage case study related to the field of tourism was conducted: ChatGPT (v.3.5) was asked to respond to the first questionnaire on behalf of 400 participants and to the second on behalf of 800 participants. The quality of the artificial intelligence (AI)-generated data sets was statistically tested via descriptive statistics, correlation analysis, exploratory factor analysis, confirmatory factor analysis and Harman's single-factor test.
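
Of the checks listed, Harman's single-factor test is straightforward to sketch; a minimal Python version with the factor_analyzer package, with hypothetical file and item names:

```python
# Illustrative Harman's single-factor test; file and variables are assumed.
import pandas as pd
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("chatgpt_generated_responses.csv")  # assumed 400 rows of items

# Force all observed items onto a single unrotated factor.
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(df)

variance, proportion, cumulative = fa.get_factor_variance()
print(f"Single factor explains {proportion[0]:.1%} of total variance")
# By convention, a single factor explaining more than ~50% of the variance
# flags common method bias, the issue the authors report for these data.
```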

Findings

The results revealed that ChatGPT could respond to the questionnaires on behalf of as many participants as the desired sample size and could present the generated data sets in a table format ready for analysis. It was also observed that ChatGPT's responses were systematic and that it created a statistically ideal data set. However, the generated data exhibited high correlations among the observed variables, the measurement model did not achieve sufficient goodness of fit and the issue of common method bias emerged. The conclusion reached is that ChatGPT does not, or cannot yet, generate data of suitable quality for advanced-level statistical analyses.

Originality/value

This study shows that ChatGPT can provide quantitative data to researchers attempting to fabricate data sets unethically. It therefore offers a new and significant argument to the ongoing debates about the unethical use of ChatGPT. In addition, a quantitative data set generated by AI was statistically examined for the first time in this study. The results proved that the data produced by ChatGPT are problematic in certain respects, shedding light on several points that journal editors should consider during the editorial process.

Open Access
Article
Publication date: 11 October 2023

Murali Chari

Abstract

Purpose

The purpose of this paper is to make the case that ethical guardrails in emerging technology businesses are inadequate and to develop solutions to strengthen these guardrails.

Design/methodology/approach

Based on literature and first-principles reasoning, the paper develops theoretical arguments about the fundamental purpose of ethical guardrails and how they evolve. It then uses these arguments, along with the characteristics that distinguish emerging technology businesses, to identify inadequacies in the ethical guardrails for such businesses and to develop solutions to strengthen them.

Findings

The paper shows that the ethical guardrails for emerging technology businesses are inadequate and that the reasons for this are systematic. The paper also develops actionable recommendations to strengthen these guardrails.

Originality/value

The paper develops the novel argument that reasons for the inadequate ethical guardrails in emerging technology businesses are systematic and stem from the inadequacy of laws and regulations, inadequacy of boards and the focus of business executives.

Details

Journal of Ethics in Entrepreneurship and Technology, vol. 3 no. 2
Type: Research Article
ISSN: 2633-7436
