Search results

1 – 10 of over 3000
Article
Publication date: 11 October 2023

Karen M. DSouza and Aaron M. French

Abstract

Purpose

Purveyors of fake news perpetuate information that can harm society, including businesses. Social media's reach quickly amplifies the distortions of fake news. Research has not yet fully explored the mechanisms of such adversarial behavior or the adversarial machine learning techniques that might be deployed to detect fake news. Debiasing techniques are also explored to combat the generation of fake news using adversarial data. The purpose of this paper is to present the challenges and opportunities in fake news detection.

Design/methodology/approach

First, this paper provides an overview of adversarial behaviors and current machine learning techniques. Next, it describes the use of long short-term memory (LSTM) networks to identify fake news in a corpus of articles. Finally, it presents a novel adversarial behavior approach to protecting targeted business datasets from attacks.
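
As a rough illustration of the classification step described above, a minimal sketch of an LSTM text classifier is given below (Python/Keras). It is not the authors' implementation; the corpus variables, vocabulary size and layer sizes are assumptions.

# Minimal sketch of an LSTM fake-news classifier; hyperparameters and the
# corpus variables (train_texts, train_labels) are assumed, not from the paper.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20_000   # assumed vocabulary size
MAX_LEN = 300         # assumed maximum article length in tokens

vectorizer = layers.TextVectorization(max_tokens=VOCAB_SIZE,
                                      output_sequence_length=MAX_LEN)
# vectorizer.adapt(train_texts)       # train_texts: hypothetical article corpus
# x_train = vectorizer(train_texts)   # integer token sequences

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),     # token embeddings
    layers.LSTM(64),                       # sequence encoder
    layers.Dense(1, activation="sigmoid")  # 1 = fake, 0 = genuine (assumed labels)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, train_labels, validation_split=0.2, epochs=5)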

Findings

This research highlights the need for a corpus of fake news that can be used to evaluate classification methods. Adversarial debiasing using IBM's Artificial Intelligence Fairness 360 (AIF360) toolkit can improve the disparate impact associated with unfavorable characteristics of a dataset. Debiasing also demonstrates significant potential to reduce fake news generation that exploits the inherent bias in the data. These findings provide avenues for further research on adversarial collaboration and robust information systems.
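
For concreteness, the disparate-impact measurement and adversarial debiasing mentioned above could be sketched with AIF360 roughly as follows; the data file, column names and group encodings are assumptions, not the paper's pipeline.

# Minimal AIF360 sketch: measure disparate impact on a protected attribute,
# then mitigate it with adversarial debiasing. The CSV file and the "label"
# and "sex" columns are hypothetical placeholders.
import pandas as pd
import tensorflow.compat.v1 as tf
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.inprocessing import AdversarialDebiasing

df = pd.read_csv("article_features.csv")   # hypothetical engineered features
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])
privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]

# Disparate impact = ratio of favourable-outcome rates; 1.0 means parity.
before = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("disparate impact before:", before.disparate_impact())

tf.disable_eager_execution()               # AdversarialDebiasing uses TF1 graphs
sess = tf.Session()
debiaser = AdversarialDebiasing(privileged_groups=privileged,
                                unprivileged_groups=unprivileged,
                                scope_name="debias", sess=sess, debias=True)
debiaser.fit(dataset)
debiased = debiaser.predict(dataset)

after = BinaryLabelDatasetMetric(debiased, unprivileged_groups=unprivileged,
                                 privileged_groups=privileged)
print("disparate impact after:", after.disparate_impact())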

Originality/value

Adversarial debiasing of datasets demonstrates that, by reducing bias related to protected attributes such as sex, race and age, businesses can reduce the potential for adversarial data to be exploited to generate fake news.

Details

Internet Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 5 July 2018

Yiyi Fan and Mark Stevenson

Abstract

Purpose

This paper aims to investigate how supply chain risks can be identified in both collaborative and adversarial buyer–supplier relationships (BSRs).

Design/methodology/approach

This research includes a multiple-case study involving ten Chinese manufacturers with two informants per organisation. Data have been interpreted from a multi-level social capital perspective (i.e. from both an individual and organisational level), supplemented by signalling theory.

Findings

Buyers use different risk identification strategies or apply the same strategy in different ways according to the BSR type. The impact of organisational social capital on risk identification is contingent upon the degree to which individual social capital is deployed in a way that benefits an individual’s own agenda versus that of the organisation. Signalling theory generally complements social capital theory and helps further understand how buyers can identify risks, especially in adversarial BSRs, e.g. by using indirect signals from suppliers or other supply chain actors to “read between the lines” and anticipate risks.

Research limitations/implications

Data collection is focussed on China and is from the buyer side only. Future research could explore other contexts and include the supplier perspective.

Practical implications

The types of relationships that are developed by buyers with their supply chain partners at an organisational and an individual level have implications for risk exposure and how risks can be identified. The multi-level analysis highlights how strategies such as employee rotation and retention can be deployed to support risk identification.

Originality/value

Much of the extant literature on supply chain risk management is focussed on risk mitigation, whereas risk identification is under-represented. A unique case-based insight is provided into risk identification in different types of BSRs by using a multi-level social capital approach complemented by signalling theory.

Details

Supply Chain Management: An International Journal, vol. 23 no. 4
Type: Research Article
ISSN: 1359-8546

Article
Publication date: 21 July 2020

Mohammad Moradi and Qi Li

Abstract

Purpose

Over the past decade, many research works in various disciplines have benefited from the endless ocean of people and their potential (in the form of crowdsourcing) as an effective problem-solving strategy and computational model. But nothing interesting is ever completely one-sided. When it comes to leveraging people's power, the dark side of crowdsourcing poses threats that have not been considered as thoroughly as they should be, such as recruiting black hat crowdworkers to organize targeted adversarial attacks. The purpose of this paper is to draw more attention to this critical issue by investigating its different aspects.

Design/methodology/approach

To delve into the details of such malicious intentions, the related literature and previous research have been studied. Then, four major typologies of adversarial crowdsourced attacks, as well as some real-world scenarios, are discussed and delineated. Finally, possible future threats are introduced.

Findings

Despite many works on adversarial crowdsourcing, only a few research studies have specifically considered the issue in the context of cyber security. In this regard, the proposed typologies (and the scenarios addressed) for such human-mediated attacks can shed light on how to identify and confront these threats.

Originality/value

To the best of the authors' knowledge, this is the first work in which the titular topic is investigated in detail. Given the popularity and efficiency of leveraging crowds' intelligence and effort across a wide range of application domains, adversarial human-driven intentions are likely to gain more attention. In this regard, it is anticipated that the present study can serve as a roadmap for proposing defensive mechanisms to cope with such diverse threats.

Details

Journal of Information, Communication and Ethics in Society, vol. 19 no. 1
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 3 May 2016

Lindsay Meredith

Abstract

Purpose

The purpose of this paper is to introduce a template to guide practitioners in the creation of multiple marketing plans that target different groups of stakeholders – some of whom are supportive of, and others adversarial to, the business-to-business (B2B) marketer's agenda.

Design/methodology/approach

The methodology involved a combination of purposeful sampling, real-time participatory observation, action research and secondary data analysis. The main method of this research is analytical and conceptual with the objective of identifying the diverse groups of stakeholders with whom business marketers must interact.

Findings

In cases where multiple marketing plans were used for different stakeholder groups, B2B firms encountered lower levels of negative attribution from social network systems, mass media and subsequently public and governmental stakeholders.

Originality/value

This paper suggests the need for multiple marketing plans that target not only supportive customers but also neutral and adversarial stakeholders who represent a source of negative attribution because they have the potential to derail or even destroy the B2B firm’s marketing agenda. It is suggested that practitioners must also address those stakeholders who distrust or even dislike their firm and its marketing objectives.

Details

Journal of Business & Industrial Marketing, vol. 31 no. 4
Type: Research Article
ISSN: 0885-8624

Article
Publication date: 4 January 2013

Vasilios Katos, Frank Stowell and Peter Bednar

Abstract

Purpose

The purpose of this paper is to develop an approach for investigating the impact of surveillance technologies used to facilitate security and its effect upon privacy.

Design/methodology/approach

The authors develop a methodology by drawing on an isomorphy of concepts from the discipline of Macroeconomics. This proposal is achieved by considering security and privacy as economic goods, where surveillance is seen as security technologies serving identity (ID) management and privacy is considered as being supported by ID assurance solutions.

Findings

Reflecting upon Ashby's Law of Requisite Variety, the authors conclude that surveillance policies will not meet espoused ends and investigate an alternative strategy for policy making.

Practical implications

The result of this exercise suggests that the proposed methodology could be a valuable tool for decision making at a strategic and aggregate level.

Originality/value

The paper extends the current literature on economics of privacy by incorporating methods from macroeconomics.

Article
Publication date: 7 December 2021

Yue Wang and Sai Ho Chung

Abstract

Purpose

This study is a systematic literature review of the application of artificial intelligence (AI) in safety-critical systems. The authors aim to present the current application status according to different AI techniques and propose some research directions and insights to promote its wider application.

Design/methodology/approach

A total of 92 articles were selected for this review through a systematic literature review along with a thematic analysis.

Findings

The literature is divided into three themes: interpretable methods, explanation of model behavior and reinforcement of safe learning. Among AI techniques, the most widely used are Bayesian networks (BNs) and deep neural networks. In addition, given the huge potential in this field, four future research directions are proposed.
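
To give non-specialist readers a feel for the Bayesian-network reasoning reported as widespread, a toy example evaluated by direct enumeration is sketched below; the structure and probabilities are invented for illustration and are not taken from any reviewed paper.

# Illustrative only: a toy two-cause Bayesian network for a safety-critical
# alarm, with the posterior computed by direct enumeration. All numbers are
# invented for the sake of the example.
from itertools import product

P_fault = {0: 0.95, 1: 0.05}   # P(SensorFault)
P_heat = {0: 0.90, 1: 0.10}    # P(Overheat)
# P(Alarm = 1 | SensorFault, Overheat)
P_alarm1 = {(0, 0): 0.01, (0, 1): 0.80, (1, 0): 0.70, (1, 1): 0.95}

def joint(fault, heat, alarm):
    """Joint probability of one full assignment of the three variables."""
    p_a1 = P_alarm1[(fault, heat)]
    return P_fault[fault] * P_heat[heat] * (p_a1 if alarm == 1 else 1 - p_a1)

# Posterior P(Overheat = 1 | Alarm = 1), summing over the hidden causes.
causes = list(product((0, 1), repeat=2))
numerator = sum(joint(f, h, 1) for f, h in causes if h == 1)
denominator = sum(joint(f, h, 1) for f, h in causes)
print("P(Overheat | Alarm fired) =", round(numerator / denominator, 3))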

Practical implications

This study is of vital interest to industry practitioners and regulators in the safety-critical domain, as it provides a clear picture of the current status and points out that some AI techniques have great application potential. For those that are inherently appropriate for use in safety-critical systems, regulators can conduct in-depth studies to validate and encourage their use in the industry.

Originality/value

This is the first review of the application of AI in safety-critical systems in the literature. It marks a first step toward advancing AI in the safety-critical domain. The paper also has potential value in promoting the use of the term "safety-critical" and in reducing the fragmentation of the literature.

Details

Industrial Management & Data Systems, vol. 122 no. 2
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 23 November 2023

Konstantinos Kalodanis, Panagiotis Rizomiliotis and Dimosthenis Anagnostopoulos

Abstract

Purpose

The purpose of this paper is to highlight the key technical challenges that derive from the recently proposed European Artificial Intelligence Act and, specifically, to investigate the applicability of the requirements that the AI Act mandates for high-risk AI systems from the perspective of AI security.

Design/methodology/approach

This paper presents the main points of the proposed AI Act, with emphasis on the compliance requirements for high-risk systems. It matches known AI security threats to the relevant technical requirements, demonstrates the impact that these threats can have on the AI Act's technical requirements and evaluates the applicability of those requirements based on the effectiveness of existing security protection measures. Finally, the paper highlights the necessity of an integrated framework for AI system evaluation.
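
As a rough illustration of such threat-to-requirement matching, a generic mapping is sketched below in Python; the pairings are an illustrative example based on well-known attack classes and the proposal's high-risk requirements (e.g. Articles 10 and 15), not the paper's own assessment.

# Illustrative only: a simple lookup from common AI security threats to the
# AI Act high-risk requirements they most directly stress. The pairings are a
# generic sketch, not the paper's evaluation.
THREAT_TO_REQUIREMENTS = {
    "evasion / adversarial examples": ["Art. 15 accuracy, robustness and cybersecurity"],
    "data poisoning": ["Art. 10 data and data governance",
                       "Art. 15 accuracy, robustness and cybersecurity"],
    "model extraction / stealing": ["Art. 15 accuracy, robustness and cybersecurity"],
    "membership inference": ["Art. 10 data and data governance"],
}

def affected_requirements(threat):
    """Return the requirement areas an identified threat puts under pressure."""
    return THREAT_TO_REQUIREMENTS.get(threat, ["no mapping recorded"])

print(affected_requirements("data poisoning"))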

Findings

The findings of the EU AI Act technical assessment highlight the gap between the proposed requirements and the available AI security countermeasures as well as the necessity for an AI security evaluation framework.

Keywords

AI Act, high-risk AI systems, security threats, security countermeasures

Details

Information & Computer Security, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2056-4961

Article
Publication date: 6 December 2021

Danny Murguia, Peter Demian and Robby Soetanto

Abstract

Purpose

The current understanding of building information modelling (BIM) adoption often neglects the industry context in which BIM is deployed. This is particularly problematic when policymakers are planning to enact top-down policies to promote BIM adoption in public-funded construction. Therefore, the aim of this study is to establish the industry-level factors that constrain or enable actors' intention to adopt BIM.

Design/methodology/approach

Using institutional theory with an emphasis on the cultural-cognitive elements, the authors aim to complement the understanding of BIM adoption by incorporating institutional elements into the unified theory of acceptance and use of technology (UTAUT). The cultural-cognitive elements were extracted from focus groups and interviews with architecture, construction and engineering (AEC) professionals in Peru. A modified UTAUT was empirically tested using confirmatory factor analysis (CFA) and structural equation modelling (SEM) with a dataset from 171 questionnaire responses.
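
A CFA/SEM model of this kind could, for example, be specified and fitted in Python with the semopy package, as sketched below; the construct and indicator names and the data file are hypothetical placeholders, not the study's instrument.

# Minimal sketch, not the study's model: a UTAUT-style measurement (CFA) and
# structural (SEM) specification fitted with semopy. Constructs, indicators
# and the data file are hypothetical.
import pandas as pd
import semopy

MODEL_DESC = """
PerformanceExpectancy =~ pe1 + pe2 + pe3
EffortExpectancy =~ ee1 + ee2 + ee3
IntentionToAdoptBIM =~ bi1 + bi2 + bi3
IntentionToAdoptBIM ~ PerformanceExpectancy + EffortExpectancy
"""

data = pd.read_csv("bim_survey_responses.csv")   # hypothetical questionnaire data
model = semopy.Model(MODEL_DESC)
model.fit(data)

print(model.inspect())            # factor loadings and path coefficients
print(semopy.calc_stats(model))   # fit indices such as CFI, TLI and RMSEA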

Findings

Industry characteristics, standardisation, affordability and the technology/methodology definition of BIM were found to be the cultural-cognitive elements having direct effects on individual reactions to BIM. These findings suggest that BIM adoption policies should focus on designing incentive schemes, training/educating professionals in BIM collaborative processes and developing/adapting applicable standards. However, a BIM adoption mandate would require policymakers to create collaborative procurement environments in tandem with information management and process standards.

Practical implications

Findings can be used by policymakers to significantly promote BIM adoption in contexts without a government mandate for public sector construction.

Originality/value

The study of institutional elements on BIM adoption is still limited. This study provides empirical evidence on how the cultural-cognitive elements of the industry context are associated with actors' intention to adopt BIM. Therefore, this study bridges industry and individual levels of analysis. Furthermore, this study enables policymakers to initiate actions that significantly encourage BIM adoption.

Details

Engineering, Construction and Architectural Management, vol. 30 no. 3
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 1 March 1989

Abstract

One of Sir Michael Edwards' cardinal rules of good management is that no organisation can be successful if it tolerates internal politics. Manifestly, one of the features of the UK National Health Service is that it is riven with internal politics. The tendency to indulge in internal political activity arises from the desire of individuals, departments and professional groups to protect and expand their own empires, spheres of influence and levels of activity. It is said that this is a natural human activity, and this may be so; nevertheless, if it is harmful to the organisation, it is possible to temper it and, if necessary, control it without, as so often happens, shedding blood in the process.

Details

Journal of Management in Medicine, vol. 4 no. 3
Type: Research Article
ISSN: 0268-9235

Article
Publication date: 20 May 2019

Anastassia Lauterbach

Abstract

Purpose

This paper aims to inform policymakers about key artificial intelligence (AI) technologies, risks and trends in national AI strategies. It suggests a framework of social governance to ensure the emergence of safe and beneficial AI.

Design/methodology/approach

The paper is based on approximately 100 interviews with researchers, executives of traditional companies and startups, and policymakers in seven countries. The interviews were carried out between January and August 2017.

Findings

Policymakers still need to develop an informed, scientifically grounded and forward-looking view of what societies and businesses might expect from AI. There is a lack of transparency about what the key AI risks are and what regulatory approaches might handle them. There is no collaborative framework in place that involves all the important actors in deciding on AI technology design principles and governance. Today's technology decisions will have long-term consequences for the lives of billions of people and the competitiveness of millions of businesses.

Research limitations/implications

The research did not include many insights from emerging markets.

Practical implications

Policymakers will understand the scope of the most important AI concepts, risks and national strategies.

Social implications

AI is progressing at a very fast rate, changing industries, businesses and the ways in which companies learn, generate business insights, design products and communicate with their employees and customers. It has a big societal impact, as – if not designed with care – it can scale human bias, increase cybersecurity risk and lead to negative shifts in employment. Like no other invention, it can tighten control by the few over the many, spread false information and propaganda and thereby shape the perceptions of people, communities and enterprises.

Originality/value

This paper is a compendium of the most important concepts of AI, bringing clarity to discussions around AI risks and the ways to mitigate them. The breadth of topics is valuable to policymakers, students, practitioners, general executives and board directors alike.

Details

Digital Policy, Regulation and Governance, vol. 21 no. 3
Type: Research Article
ISSN: 2398-5038
