Search results
1 – 2 of 2

Derrick Boakye, David Sarpong, Dirk Meissner and George Ofosu
Abstract
Purpose
Cyber-attacks that generate technical disruptions in organisational operations and damage the reputation of organisations have become all too common in the contemporary organisation. This paper explores the reputation repair strategies undertaken by organisations in the event of becoming victims of cyber-attacks.
Design/methodology/approach
To develop their contribution in the context of the Internet service provider industry, the authors draw on a qualitative case study of TalkTalk, a British telecommunications company providing business-to-business (B2B) and business-to-consumer (B2C) Internet services, which was the victim of a “significant and sustained” cyber-attack in October 2015. Data for the enquiry are sourced from publicly available archival documents such as newspaper articles, press releases, podcasts and parliamentary hearings on the TalkTalk cyber-attack.
Findings
The findings suggest a dynamic interplay of technical and rhetorical responses in dealing with cyber-attacks. This plays out in the form of marshalling communication and mortification techniques, bolstering image and riding on leader reputation, which serially combine to strategically orchestrate reputational repair and stigma erasure in the event of a cyber-attack.
Originality/value
Analysing a prototypical case of an organisation in dire straits following a cyber-attack, the paper provides a systematic characterisation of the setting-in-motion of strategic responses to manage, revamp and ameliorate damaged reputation during cyber-attacks, which tend to negatively shape the evaluative perceptions of the organisation's salient audience.
Lai-Wan Wong, Garry Wei-Han Tan, Keng-Boon Ooi and Yogesh Dwivedi
Abstract
Purpose
The deployment of artificial intelligence (AI) technologies in travel and tourism has received much attention in the wake of the pandemic. While societal adoption of AI has accelerated, it also raises some trust challenges. Literature on trust in AI is scant, especially regarding the vulnerabilities faced by different stakeholders, which limits its ability to inform policy and practice. This work proposes a framework that examines the use of AI technologies from both institutional and self perspectives to explain how travelers form trust in the mandated use of AI-based technologies.
Design/methodology/approach
An empirical investigation using partial least squares-structural equation modeling was employed on responses from 209 users. This paper considered factors related to the self (perceptions of self-threat, privacy empowerment, trust propensity) and institution (regulatory protection, corporate privacy responsibility) to understand the formation of trust in AI use for travelers.
Findings
Results showed that self-threat, trust propensity and regulatory protection influence users' trust in AI use, whereas privacy empowerment and corporate privacy responsibility do not.
Originality/value
Insights from past studies on AI in travel and tourism are limited. This study advances current literature on affordance and reactance theories to provide a better understanding of what makes travelers trust the mandated use of AI technologies. This work also demonstrates the paradoxical effects of the self and the institution on technologies and their relationship to trust. For practice, this study offers insights for enhancing adoption by developing trust.