Search results

1 – 10 of 471
Article
Publication date: 23 September 2013

Andreas Kuehn

Abstract

Purpose

This article compares the use of deep packet inspection (DPI) technology to the use of cookies for online behavioral advertising (OBA), in the form of two competing paradigms. It seeks to explain why DPI was eliminated as a viable option due to political and regulatory reactions whereas cookie technology was not, even though it raises some of the same privacy issues.

Design/methodology/approach

The paradigms draw from two-sided market theory to conceptualize OBA. Empirical case studies, NebuAd's DPI platform and Facebook's Beacon program, substantiate the paradigms with insights into the controversies on behavioral tracking between 2006 and 2009 in the USA. The case studies are based on document analyses and interviews.

Findings

Comparing the two cases from a technological, economic, and institutional perspective, the article argues that both paradigms were equally privacy intrusive. Thus, it rejects the generally held view that privacy issues can explain the outcome of the battle. Politics and regulatory legacy tilted the playing field towards the cookies paradigm, impeding a competing technology.

Originality/value

Shifting the narrative away from privacy to competing tracking paradigms and their specific actors sheds light on the political and regulatory rationales that were not considered in previous research on OBA. In particular, by setting forth institutional aspects of OBA – and DPI in general – the case studies provide much-needed empirical analysis to reassess tracking technologies and policy outcomes.

Details

info, vol. 15 no. 6
Type: Research Article
ISSN: 1463-6697

Article
Publication date: 8 August 2016

Nan Zhang, Heikki Hämmäinen and Hannu Flinck

Abstract

Purpose

This paper models the cost efficiency of service function chaining (SFC) in software-defined LTE networks and compares it with traditional LTE networks.

Design/methodology/approach

Both the capital expenditure (CAPEX) and operational expenditure (OPEX) of the SFC are quantified using an average Finnish mobile network in 2015 as a reference. The modeling inputs are gathered through semi-structured interviews with Finnish mobile network operators (MNO) and network infrastructure vendors operating in the Finnish market.
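As a back-of-the-envelope illustration of the kind of comparison such a techno-economic model performs, one can annualize CAPEX over a depreciation period, add yearly OPEX, and compare the two deployments. All figures below are invented placeholders, not the paper's interview-based inputs:

```python
# Toy annualized-cost comparison between a traditional SFC deployment and
# an SDN-based one. Figures are illustrative placeholders only.

def annual_cost(capex, opex_per_year, depreciation_years):
    """Annualized cost: straight-line depreciated CAPEX plus yearly OPEX."""
    return capex / depreciation_years + opex_per_year

traditional = annual_cost(capex=1_000_000, opex_per_year=300_000, depreciation_years=5)
sdn_sfc = annual_cost(capex=600_000, opex_per_year=180_000, depreciation_years=5)
savings_pct = 100 * (traditional - sdn_sfc) / traditional
print(f"{savings_pct:.0f}% annual saving")
```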

Findings

The modeling shows that software-defined networking (SDN) can reduce SFC-related CAPEX and OPEX significantly for an average Finnish MNO in 2015. The analysis of different types of MNOs implies that an MNO without deep packet inspection sees the biggest cost savings compared with other MNO types.

Practical implications

Service function investments typically amount to 5-20 per cent of overall MNO network investments, so savings in SFC may significantly affect the cost structure of an MNO. In addition, SFC acts both as a business interface, which connects local MNOs with global internet service providers, and as a technical interface, where the 3GPP and IETF standards meet. Thus, cost-efficient operation of SFC may bring competitive advantages to the MNO.

Originality/value

The results show a solid basis for network-related cost savings in SFC and contribute to cost-conscious MNO investment decisions. In addition, the results act as a baseline scenario for further studies that combine SDN with virtualization to re-optimize network service functions.

Details

info, vol. 18 no. 5
Type: Research Article
ISSN: 1463-6697

Book part
Publication date: 7 May 2019

Emanuel Boussios

Abstract

This chapter focuses on a critical issue in cyber intelligence in the United States (US) that concerns the engagement of state-owned or state-controlled entities in overseeing citizens’ activity in cyberspace. The emphasis in the discussion is placed on the constitutionality of state actions and the shifting boundaries within which the state can act in the name of security to protect its people from the nation’s enemies. A second piece of this discussion is which state actors and agencies can control the mechanisms by which this sensitive cyber information is collected, stored, and, if needed, acted upon. The most salient case with regard to this debate is that of Edward Snowden. It reveals the US government’s abuses of this surveillance machinery, prompting major debates around the topics of privacy, national security, and mass digital surveillance. When observing the response to Snowden’s disclosures, one can ask what point of view is being ignored, or what questions are not being answered. By considering the silence as a part of our everyday language we can improve our understanding of mediated discourses. Recommendations on cyber-intelligence reforms in response to Snowden’s revelations – and whether these are in fact practical in modern, high-technology societies such as the US – follow.

Details

Politics and Technology in the Post-Truth Era
Type: Book
ISBN: 978-1-78756-984-3

Article
Publication date: 4 March 2014

Mark A. Harris and Karen P. Patten

Abstract

Purpose

This paper's purpose is to identify and accentuate the dilemma faced by small- to medium-sized enterprises (SMEs) that use mobile devices as part of their mobility business strategy. While large enterprises have the resources to implement emerging security recommendations for mobile devices, such as smartphones and tablets, SMEs often lack the IT resources and capabilities needed. The SME mobile device business dilemma is to invest in more expensive maximum security technologies, invest in less expensive minimum security technologies with increased risk, or postpone the business mobility strategy in order to protect enterprise and customer data and information. This paper investigates mobile device security and the implications of security recommendations for SMEs.

Design/methodology/approach

This conceptual paper reviews mobile device security research, identifies increased security risks, and recommends security practices for SMEs.

Findings

This paper identifies emerging mobile device security risks and provides a set of minimum mobile device security recommendations practical for SMEs. However, SMEs would still have increased security risks versus large enterprises who can implement maximum mobile device security recommendations. SMEs are faced with a dilemma: embrace the mobility business strategy and adopt and invest in the necessary security technology, implement minimum precautions with increased risk, or give up their mobility business strategy.

Practical implications

This paper develops a practical list of minimum mobile device security recommendations for SMEs. It also increases the awareness of potential security risks for SMEs from mobile devices.

Originality/value

This paper expands previous research investigating SME adoption of computers, broadband internet-based services, and Wi-Fi by adding mobile devices. It describes the SME competitive advantages from adopting mobile devices for enterprise business mobility, while accentuating the increased business risks and implications for SMEs.

Details

Information Management & Computer Security, vol. 22 no. 1
Type: Research Article
ISSN: 0968-5227

Article
Publication date: 1 June 2012

Teodor Sommestad, Hannes Holm and Mathias Ekstedt

Abstract

Purpose

The purpose of this paper is to identify the importance of the factors that influence the success rate of remote arbitrary code execution attacks, that is, attacks which use software vulnerabilities to execute the attacker's own code on targeted machines. Both attacks against servers and attacks against clients are studied.

Design/methodology/approach

The success rates of attacks are assessed for 24 scenarios: 16 scenarios for server‐side attacks and eight for client‐side attacks. The assessment is made through domain experts and is synthesized using Cooke's classical method, an established method for weighting experts' judgments. The variables included in the study were selected based on the literature, a pilot study, and interviews with domain experts.
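The aggregation step can be roughly illustrated with a performance-weighted combination in the spirit of Cooke's classical method: experts are weighted by how well they scored on seed (calibration) questions, and poorly calibrated experts are dropped. The calibration scores, estimates, and cutoff below are invented for illustration; the paper's actual elicitation is far more elaborate:

```python
# Simplified sketch of performance-weighted expert aggregation.
# All numbers are illustrative, not taken from the study.

def combine_expert_estimates(estimates, calibration_scores, cutoff=0.0):
    """Weighted average of expert estimates; experts whose calibration
    score falls below the cutoff receive zero weight."""
    weights = [s if s >= cutoff else 0.0 for s in calibration_scores]
    total = sum(weights)
    if total == 0:
        raise ValueError("no expert passed the calibration cutoff")
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Hypothetical success-rate estimates (percent) from four experts for one
# attack scenario, with illustrative calibration scores.
estimates = [40.0, 55.0, 60.0, 20.0]
scores = [0.8, 0.6, 0.4, 0.05]
print(round(combine_expert_estimates(estimates, scores, cutoff=0.1), 1))
```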

Findings

Depending on the scenario in question, the expected success rate varies between 15 and 67 percent for server‐side attacks and between 43 and 67 percent for client‐side attacks. Based on these scenarios, the influence of different protective measures is identified.

Practical implications

The results of this study offer guidance to decision makers on how to best secure their assets against remote code execution attacks. These results also indicate the overall risk posed by this type of attack.

Originality/value

Attacks that use software vulnerabilities to execute code on targeted machines are common and pose a serious risk to most enterprises. However, there are no quantitative data on how difficult such attacks are to execute or on how effective security measures are against them. The paper provides such data using a structured technique to combine expert judgments.

Article
Publication date: 10 April 2017

Raman Singh, Harish Kumar, Ravinder Kumar Singla and Ramachandran Ramkumar Ketti

Abstract

Purpose

The paper addresses various cyber threats and their effects on the internet. A review of the literature on intrusion detection systems (IDSs) as a means of mitigating internet attacks is presented, and gaps in the research are identified. The purpose of this paper is to identify the limitations of the current research and to present future directions for intrusion/malware detection research.

Design/methodology/approach

The paper presents a review of the research literature on IDSs, prior to identifying research gaps and limitations and suggesting future directions.

Findings

The popularity of the internet makes it vulnerable to various cyber-attacks. Ongoing research on intrusion detection methods aims to overcome the limitations of earlier approaches to internet security. However, findings from the literature review indicate a number of limitations of existing techniques: poor accuracy, high detection time, and low flexibility in detecting zero-day attacks.

Originality/value

This paper provides a review of major issues in intrusion detection approaches. On the basis of a systematic and detailed review of the literature, various research limitations are discovered. Clear and concise directions for future research are provided.

Details

Online Information Review, vol. 41 no. 2
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 25 November 2013

Wu He

Abstract

Purpose

As mobile malware and viruses rapidly increase in frequency and sophistication, mobile social media has recently become a very popular attack vector. The purpose of this paper is to survey the state of the art of the security aspects of mobile social media, identify recent trends, and provide recommendations for researchers and practitioners in this fast-moving field.

Design/methodology/approach

This paper reviews disparate discussions in the literature on the security aspects of mobile social media through blog mining and an extensive literature search. Based on the detailed review, the author summarizes some key insights to help enterprises understand the security risks associated with mobile social media.

Findings

Risks related to mobile social media are identified based on the results of the review. Best practices, useful tips, and guidance are offered to help enterprises mitigate the security risks of mobile social media.

Originality/value

The paper consolidates the fragmented discussion in literature and provides an in-depth review to help researchers understand the latest development of security risks associated with mobile social media.

Details

Information Management & Computer Security, vol. 21 no. 5
Type: Research Article
ISSN: 0968-5227

Article
Publication date: 14 October 2019

S. Velliangiri

Abstract

Purpose

Denial-of-service threats are regularly regarded as tools for effortlessly knocking online services offline. Moreover, recent incidents reveal that these threats are constantly employed to mask other attacks, such as disseminating malware, information theft, wire scams and bitcoin mining (Sujithra et al., 2018; Boujnouni and Jedra, 2018). In some cases, denial-of-service attacks have been employed to cyberheist financial firms for sums around $100,000. Documentation from Neustar reports that about 70 percent of the financial sector is aware of the threat and that, while incidents result in few losses, more than 35 percent of denial-of-service attempts are identified as malware soon after the threat is launched (Divyavani and Dileep Kumar Reddy, 2018). Intensive packet analysis (IPA) explores the packet headers from Layers 2 to 4 along with application-layer information from Layers 5 to 7 to locate and evade vulnerable network-related threats. Networked systems can be overwhelmed even by low-powered denial-of-service operations if the security modules exhaust the systems' resources. The paper aims to discuss these issues.

Design/methodology/approach

The first constraint is resolved by the IPDME locating the standard header delimiters, such as the carriage return/line feed (CRLF), as well as the header names. The designed IPDME finds the starting position of a header field within a packet at a fixed cost of four cycles, and the framework buffers packets at wire speed. Once the header position is located, the field value is extracted linearly from that position. Extracting all field values sequentially resolves the remaining constraints, which can be accelerated further by evaluating several bytes per cycle and skipping packets that are not required. In this way, the search space is reduced from the packet length to the header length, and the reduced extraction time allows buffered packets to be processed at a higher rate.
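A minimal software sketch of this delimiter-based header extraction (the hardware engine itself is not reproduced; the SIP message, header name, and helper function are illustrative):

```python
# Locate CRLF-delimited header lines, find a named header, and extract
# its value, restricting the search space to the header block only.

CRLF = b"\r\n"

def extract_header(packet, name):
    """Return the value of header `name` from the packet's header block,
    or None if absent. Scanning stops at the blank line, mirroring how
    the search space shrinks from packet length to header length."""
    header_end = packet.find(CRLF + CRLF)  # end of header block
    headers = packet[:header_end] if header_end != -1 else packet
    for line in headers.split(CRLF):
        if line.lower().startswith(name.lower() + b":"):
            return line.split(b":", 1)[1].strip()
    return None

msg = (b"INVITE sip:bob@example.com SIP/2.0\r\n"
       b"Call-ID: 12345@host\r\n"
       b"Content-Length: 0\r\n"
       b"\r\n")
print(extract_header(msg, b"Call-ID"))  # b'12345@host'
```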

Findings

Assessments of the IPDME against widely used SIP application-layer tools show that hardware offloading of the IPDME can reduce the load on essential system resources by about 25 percent. The IPDME achieves a speed-up of 22X-75X compared with the PJSIP parser and the SNORT SIP pre-processor. A single IPDME shows a speed-up of 4X-6X over 12 instances of SNORT parsers running on 12 processors, and performs 3X better than 200 parallel instances of GPU-accelerated parsers. In addition, the IPDME exhibits very low latency, 12X-1,010X lower than GPUs, and a minimal energy footprint of about 0.75 W with two engines and 3.6 W with 15 engines, which is 22.5X-100X less than GPU acceleration.

Originality/value

The IPDME ensures that system resources are not exhausted on Layer 7 extraction by forwarding packets directly from the network interface without branching into the operating system. Bypassing the operating system avoids the latency of memory accesses and essentially permits the scheme to operate at wire speed. From a security perspective, the IPDME ultimately enhances the performance of the security systems that employ it. Its increased bandwidth ensures that IPAs can operate at their utmost bandwidth, and the service time for threat-free traffic improves because the overall latency on the path between the network interface and the related applications is reduced.

Details

International Journal of Intelligent Unmanned Systems, vol. 7 no. 4
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 7 June 2013

Pablo Carballude González

Abstract

Purpose

It is increasingly difficult to ignore the importance of anonymity on the internet. Tor has been proposed as a reliable way to keep our identity secret from governments and organizations. This research evaluates its ability to protect our activity on the Web.

Design/methodology/approach

Using traffic analysis over ACK packets, among others, fingerprints of websites can be created and later used to recognise Tor traffic.
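The fingerprinting idea can be sketched as follows: represent each website visit by a feature vector of traffic statistics (e.g. packet counts and sizes, including ACK behaviour), then match an observed trace to the closest stored fingerprint. The features and fingerprints below are invented for illustration; the paper's actual method and dataset are not reproduced here:

```python
# Nearest-fingerprint classification of a traffic trace.
# Feature vectors and site names are hypothetical.

import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(trace, fingerprints):
    """Return the site whose stored fingerprint is nearest to the trace."""
    return min(fingerprints, key=lambda site: euclidean(trace, fingerprints[site]))

# Hypothetical fingerprints: (total packets, % outgoing, mean packet size)
fingerprints = {
    "site-a": [120.0, 35.0, 680.0],
    "site-b": [400.0, 20.0, 910.0],
}
observed = [115.0, 33.0, 700.0]
print(classify(observed, fingerprints))  # site-a
```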

Findings

Tor does not add enough entropy to HTTP traffic, which allows us to recognise the access to static websites without breaking Tor's cryptography.

Research limitations/implications

This work shows that the method presented behaves well with a limited set of fingerprints. Further research should be performed on its reliability with larger sets.

Social implications

Tor has been used by political dissidents and citizens in countries without freedom of speech to access banned websites such as Twitter or Facebook. This paper shows that it might be possible for their countries to know what they have done.

Originality/value

This paper shows that while Tor does a good job of protecting the content of our communications, it is weak at protecting the identity of the websites being accessed.

Details

Information Management & Computer Security, vol. 21 no. 2
Type: Research Article
ISSN: 0968-5227

Article
Publication date: 13 November 2017

Christian Fuchs and Daniel Trottier

Abstract

Purpose

This paper aims to present results of a study that focused on the question of how computer and data experts think about Internet and social media surveillance after Edward Snowden’s revelations about the existence of mass-surveillance systems of the Internet such as Prism, XKeyscore and Tempora. Computer and data experts’ views are of particular relevance because they are confronted day by day with questions about the processing of personal data, privacy and data protection.

Design/methodology/approach

The authors conducted two focus groups with a total of ten experts based in London. As London is considered by some as the surveillance capital of the world, and has a thriving Internet industry, it provided a well-suited context.

Findings

The focus group discussions featured three topics that are of crucial importance for understanding Internet and social media surveillance: the political economy of surveillance in general; surveillance in the context of the Snowden revelations; and the question of what the best political reactions are to the existence of a surveillance-industrial complex that results in political and economic control of the Internet and social media. The focus groups provided indications that computer and data experts are pre-eminently informed on how Internet surveillance works, are capable of critically assessing its implications for society and have ideas about what should be done politically.

Originality/value

Studies of privacy and surveillance after Edward Snowden’s revelations have taken on a new dimension: large-scale covert surveillance is conducted in a collaborative endeavour of secret services, private communications corporations and security companies. It has become evident that an Internet surveillance-industrial complex exists, in which capitalist communications and security corporations and state institutions collaborate.

Details

Journal of Information, Communication and Ethics in Society, vol. 15 no. 4
Type: Research Article
ISSN: 1477-996X
