Search results

1 – 10 of 509
Article
Publication date: 23 September 2013

Andreas Kuehn


Abstract

Purpose

This article compares the use of deep packet inspection (DPI) technology to the use of cookies for online behavioral advertising (OBA), framing them as two competing paradigms. It seeks to explain why DPI was eliminated as a viable option due to political and regulatory reactions, whereas cookie technology was not, even though it raises some of the same privacy issues.

Design/methodology/approach

The paradigms draw from two-sided market theory to conceptualize OBA. Empirical case studies, NebuAd's DPI platform and Facebook's Beacon program, substantiate the paradigms with insights into the controversies on behavioral tracking between 2006 and 2009 in the USA. The case studies are based on document analyses and interviews.

Findings

Comparing the two cases from a technological, economic, and institutional perspective, the article argues that both paradigms were equally privacy intrusive. Thus, it rejects the generally held view that privacy issues can explain the outcome of the battle. Politics and regulatory legacy tilted the playing field towards the cookies paradigm, impeding a competing technology.

Originality/value

Shifting the narrative away from privacy to competing tracking paradigms and their specific actors sheds light on the political and regulatory rationales that were not considered in previous research on OBA. In particular, by setting forth the institutional aspects of OBA – and of DPI in general – the case studies provide much-needed empirical analysis with which to reassess tracking technologies and policy outcomes.

Details

info, vol. 15 no. 6
Type: Research Article
ISSN: 1463-6697


Article
Publication date: 8 August 2016

Nan Zhang, Heikki Hämmäinen and Hannu Flinck


Abstract

Purpose

This paper models the cost efficiency of service function chaining (SFC) in software-defined LTE networks and compares it with traditional LTE networks.

Design/methodology/approach

Both the capital expenditure (CAPEX) and operational expenditure (OPEX) of SFC are quantified using an average Finnish mobile network in 2015 as a reference. The modeling inputs are gathered through semi-structured interviews with Finnish mobile network operators (MNOs) and network infrastructure vendors operating in the Finnish market.
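
To make the cost comparison concrete, the following is a minimal, deliberately simplified sketch in Python of a CAPEX/OPEX total-cost comparison between a traditional and an SDN-based service function chain. The unit costs and the reduction factors applied to the SDN-based chain are hypothetical placeholders for illustration only; they are not the paper's inputs or results.

```python
# Illustrative sketch only: hypothetical unit costs, not the paper's data.
from dataclasses import dataclass

@dataclass
class ServiceFunction:
    name: str
    capex: float          # one-off hardware/licence cost (EUR)
    opex_per_year: float  # yearly operations cost (EUR)

def total_cost(chain, years):
    """Total cost of ownership of a service function chain over a planning horizon."""
    return sum(sf.capex + sf.opex_per_year * years for sf in chain)

# Hypothetical traditional chain of dedicated middleboxes.
traditional = [
    ServiceFunction("firewall", capex=100_000, opex_per_year=20_000),
    ServiceFunction("NAT", capex=60_000, opex_per_year=12_000),
    ServiceFunction("DPI", capex=150_000, opex_per_year=30_000),
]

# Hypothetical SDN-based chain: same functions, with assumed (placeholder)
# reductions in hardware and operations cost.
sdn_based = [
    ServiceFunction(sf.name, capex=sf.capex * 0.6, opex_per_year=sf.opex_per_year * 0.7)
    for sf in traditional
]

years = 5
print(f"Traditional chain, {years}-year cost: {total_cost(traditional, years):,.0f} EUR")
print(f"SDN-based chain,   {years}-year cost: {total_cost(sdn_based, years):,.0f} EUR")
```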

Findings

The modeling shows that software-defined networking (SDN) can reduce SFC-related CAPEX and OPEX significantly for an average Finnish MNO in 2015. The analysis of different MNO types implies that an MNO without deep packet inspection sees the biggest cost savings compared with other MNO types.

Practical implications

Service function investments typically amount to 5-20 per cent of overall MNO network investments, so savings in SFC can have a substantial impact on an MNO's cost structure. In addition, SFC acts both as a business interface, connecting local MNOs with global internet service providers, and as a technical interface, where 3GPP and IETF standards meet. Thus, the cost-efficient operation of SFC may bring competitive advantages to the MNO.

Originality/value

The results show a solid basis for network-related cost savings through SFC and contribute to MNOs' cost-conscious investment decisions. In addition, the results act as a baseline scenario for further studies that combine SDN with virtualization to re-optimize network service functions.

Details

info, vol. 18 no. 5
Type: Research Article
ISSN: 1463-6697


Book part
Publication date: 7 May 2019

Emanuel Boussios


Abstract

This chapter focuses on a critical issue in cyber intelligence in the United States (US) that concerns the engagement of state-owned or state-controlled entities in overseeing citizens' activity in cyberspace. The emphasis in the discussion is placed on the constitutionality of state actions and the shifting boundaries within which the state can act in the name of security to protect its people from the nation's enemies. A second piece of this discussion is which state actors and agencies can control the mechanisms by which this sensitive cyber information is collected, stored and, if needed, acted upon. The most salient case with regard to this debate is that of Edward Snowden. It reveals the US government's abuses of this surveillance machinery, prompting major debates around the topics of privacy, national security, and mass digital surveillance. When observing the response to Snowden's disclosures, one can ask what point of view is being ignored, or what questions are not being answered. By considering the silence as a part of our everyday language, we can improve our understanding of mediated discourses. Recommendations on cyber-intelligence reforms in response to Snowden's revelations – and whether these are in fact practical in modern, high-technology societies such as the US – follow.

Details

Politics and Technology in the Post-Truth Era
Type: Book
ISBN: 978-1-78756-984-3


Article
Publication date: 4 March 2014

Mark A. Harris and Karen P. Patten


Abstract

Purpose

This paper's purpose is to identify and accentuate the dilemma faced by small- to medium-sized enterprises (SMEs) that use mobile devices as part of their business mobility strategy. While large enterprises have the resources to implement emerging security recommendations for mobile devices, such as smartphones and tablets, SMEs often lack the IT resources and capabilities needed. The SME mobile device business dilemma is to invest in more expensive maximum-security technologies, invest in less expensive minimum-security technologies with increased risk, or postpone the business mobility strategy in order to protect enterprise and customer data and information. This paper investigates mobile device security and the implications of security recommendations for SMEs.

Design/methodology/approach

This conceptual paper reviews mobile device security research, identifies increased security risks, and recommends security practices for SMEs.

Findings

This paper identifies emerging mobile device security risks and provides a set of minimum mobile device security recommendations that are practical for SMEs. However, SMEs would still face greater security risks than large enterprises, which can implement the maximum mobile device security recommendations. SMEs are thus faced with a dilemma: embrace the business mobility strategy and invest in the necessary security technology, implement minimum precautions with increased risk, or give up their business mobility strategy.

Practical implications

This paper develops a practical list of minimum mobile device security recommendations for SMEs. It also increases the awareness of potential security risks for SMEs from mobile devices.

Originality/value

This paper expands previous research investigating SME adoption of computers, broadband internet-based services, and Wi-Fi by adding mobile devices. It describes the SME competitive advantages from adopting mobile devices for enterprise business mobility, while accentuating the increased business risks and implications for SMEs.

Details

Information Management & Computer Security, vol. 22 no. 1
Type: Research Article
ISSN: 0968-5227


Content available
Book part
Publication date: 31 July 2023

Michael Nizich

Abstract

Details

The Cybersecurity Workforce of Tomorrow
Type: Book
ISBN: 978-1-80382-918-0

Article
Publication date: 30 March 2023

Wilson Charles Chanhemo, Mustafa H. Mohsini, Mohamedi M. Mjahidi and Florence U. Rashidi


Abstract

Purpose

This study explores challenges facing the applicability of deep learning (DL) in software-defined network (SDN)-based campus networks. The study explains in depth the automation problem that exists in traditional campus networks and how SDN and DL can provide mitigating solutions. It further highlights challenges that need to be addressed in order to implement SDN and DL successfully in campus networks and make them better than traditional networks.

Design/methodology/approach

The study uses a systematic literature review. Studies on DL relevant to campus networks are presented for different use cases, and their limitations are identified for further research.

Findings

The analysis of the selected studies showed that the availability of training datasets specific to campus networks, the interfacing of SDN and DL, and their integration in production networks are key issues that must be addressed to successfully deploy DL in SDN-enabled campus networks.

Originality/value

This study reports on challenges associated with the implementation of SDN and DL models in campus networks. It contributes towards further thinking about, and architecting of, proposed SDN-based DL solutions for campus networks. It highlights that single-problem solutions are harder to implement and unlikely to be adopted in production networks.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 4
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 1 June 2012

Teodor Sommestad, Hannes Holm and Mathias Ekstedt


Abstract

Purpose

The purpose of this paper is to identify the importance of the factors that influence the success rate of remote arbitrary code execution attacks, that is, attacks that use software vulnerabilities to execute the attacker's own code on targeted machines. Both attacks against servers and attacks against clients are studied.

Design/methodology/approach

The success rates of attacks are assessed for 24 scenarios: 16 scenarios for server‐side attacks and eight for client‐side attacks. The assessment is made through domain experts and is synthesized using Cooke's classical method, an established method for weighting experts' judgments. The variables included in the study were selected based on the literature, a pilot study, and interviews with domain experts.
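
As a rough illustration of the kind of aggregation involved, the Python sketch below pools experts' point estimates of an attack's success rate using a performance-weighted linear opinion pool. The estimates and weights are hypothetical placeholders; in Cooke's classical method the weights would be derived from calibration and information scores on seed questions, which this sketch omits.

```python
# Simplified illustration of performance-weighted pooling of expert judgments.
# The estimates and weights below are hypothetical; Cooke's classical method
# would compute each expert's weight from calibration and information scores
# on seed questions, which is not done here.

def weighted_pool(estimates, weights):
    """Linear opinion pool: weighted average of experts' point estimates."""
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

# Hypothetical estimates of an attack's success probability from four experts.
estimates = [0.30, 0.55, 0.45, 0.60]
# Hypothetical performance-based weights (better-calibrated experts weigh more).
weights = [0.50, 0.10, 0.25, 0.15]

print(f"Pooled success-rate estimate: {weighted_pool(estimates, weights):.2f}")
```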

Findings

Depending on the scenario in question, the expected success rate varies between 15 and 67 percent for server‐side attacks and between 43 and 67 percent for client‐side attacks. Based on these scenarios, the influence of different protective measures is identified.

Practical implications

The results of this study offer guidance to decision makers on how to best secure their assets against remote code execution attacks. These results also indicate the overall risk posed by this type of attack.

Originality/value

Attacks that use software vulnerabilities to execute code on targeted machines are common and pose a serious risk to most enterprises. However, there are no quantitative data on how difficult such attacks are to execute or on how effective security measures are against them. The paper provides such data using a structured technique to combine expert judgments.

Article
Publication date: 10 April 2017

Raman Singh, Harish Kumar, Ravinder Kumar Singla and Ramachandran Ramkumar Ketti


Abstract

Purpose

The paper addresses various cyber threats and their effects on the internet. A review of the literature on intrusion detection systems (IDSs) as a means of mitigating internet attacks is presented, and gaps in the research are identified. The purpose of this paper is to identify the limitations of current research and to present future directions for intrusion/malware detection research.

Design/methodology/approach

The paper presents a review of the research literature on IDSs, prior to identifying research gaps and limitations and suggesting future directions.

Findings

The popularity of the internet makes it vulnerable to various cyber-attacks. Ongoing research on intrusion detection methods aims to overcome the limitations of earlier approaches to internet security. However, findings from the literature review indicate a number of limitations in existing techniques: poor accuracy, high detection time, and low flexibility in detecting zero-day attacks.

Originality/value

This paper provides a review of major issues in intrusion detection approaches. On the basis of a systematic and detailed review of the literature, various research limitations are discovered. Clear and concise directions for future research are provided.

Details

Online Information Review, vol. 41 no. 2
Type: Research Article
ISSN: 1468-4527


Open Access
Article
Publication date: 19 May 2022

Akhilesh S Thyagaturu, Giang Nguyen, Bhaskar Prasad Rimal and Martin Reisslein


Abstract

Purpose

Cloud computing originated in central data centers that are connected to the backbone of the Internet. The network transport to and from a distant data center incurs long latencies that hinder modern low-latency applications. In order to flexibly support the computing demands of users, cloud computing is evolving toward a continuum of cloud computing resources that are distributed between the end users and a distant data center. The purpose of this review paper is to concisely summarize the state-of-the-art in the evolving cloud computing field and to outline research imperatives.

Design/methodology/approach

The authors identify two main dimensions (or axes) of development of cloud computing: the trend toward flexibility of scaling computing resources, which the authors denote as Flex-Cloud, and the trend toward ubiquitous cloud computing, which the authors denote as Ubi-Cloud. Along these two axes of Flex-Cloud and Ubi-Cloud, the authors review the existing research and development and identify pressing open problems.

Findings

The authors find that extensive research and development efforts have addressed some Ubi-Cloud and Flex-Cloud challenges resulting in exciting advances to date. However, a wide array of research challenges remains open, thus providing a fertile field for future research and development.

Originality/value

This review paper is the first to define the concept of the Ubi-Flex-Cloud as the two-dimensional research and design space for cloud computing research and development. The Ubi-Flex-Cloud concept can serve as a foundation and reference framework for planning and positioning future cloud computing research and development efforts.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964


