Search results

1–10 of over 26,000
Article
Publication date: 9 June 2020

Hanna M. Kreitem and Massimo Ragnedda

Abstract

Purpose

This paper aims to look at shifts in internet-related content and services economies, from audience labour economies to Web 2.0 user-generated content, and the emerging model of user computing power utilisation, powered by blockchain technologies. The authors look at and test three models of user computing power utilisation based on distributed computing (Coinhive, Cryptotab and Gridcoin), two of which use cryptocurrency mining through distributed pool mining techniques, while the third is based on distributed computing of calculations for scientific research. The three models promise benefits to their users, which the authors discuss throughout the paper, studying how they interplay with the three levels of the digital divide.
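Distributed pool mining, the technique behind two of the three models above, amounts to splitting a proof-of-work search across many user devices. A minimal sketch in Python, with an invented header, difficulty and nonce slice (illustrative only, not Coinhive's or Cryptotab's actual protocol):

```python
import hashlib

def mine_share(header: bytes, nonce_range: range, difficulty_bits: int):
    """Scan one slice of the nonce space for a hash below the target.

    In pool mining, each participating device is assigned a disjoint
    nonce range, so the search parallelises trivially across users.
    """
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    for nonce in nonce_range:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # a valid "share" to report to the pool
    return None  # nothing found in this slice; the pool assigns the next one

# One simulated participant scans its assigned slice; at 12 difficulty
# bits a share is all but guaranteed somewhere in 200,000 nonces.
found = mine_share(b"block-header", range(0, 200_000), difficulty_bits=12)
```

The pool aggregates shares from all participants and pays out in proportion to submitted work, which is the value-for-computation exchange the paper examines.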

Design/methodology/approach

The goal of this article is twofold: first, to discuss how the mining hype may be used to reduce digital inequalities; and second, to demonstrate how these services offer a new business model based on rewarding value in exchange for computational power, which would allow more online opportunities for people and thus reduce digital inequalities. Finally, this contribution discusses and proposes a method for a fair revenue model for content and online service providers that uses users’ device computing resources, or computational power, rather than their data and attention. The method is represented by a model that allows for consensual use of user computing resources in exchange for access to content, software tools and services, acting essentially as an alternative online business model.

Findings

First, allowing users to convert their devices’ computational power into value, whether through access to services or content or by receiving cryptocurrency and payments in return for providing services, content or computational power directly, contributes to bridging digital divides, even at fairly small levels. Second, the advent of blockchain technologies is shifting power relations between end users on the one hand and content developers and service providers on the other, and is a necessity for the decentralisation of the internet and internet services.

Originality/value

The article studies the effect of services that rely on distributed computing and mining on digital inequalities, by looking at three different case studies – Coinhive, Gridcoin and Cryptotab – that promise to provide value in return for using computing resources. The article discusses how these services may reduce digital inequalities by affecting the three levels of the digital divide, namely, access to information and communication technologies (ICTs) (first level), skills and motivations in using ICTs (second level) and capacities in using ICTs to get concrete benefits (third level).

Details

Journal of Information, Communication and Ethics in Society, vol. 18 no. 3
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 1 June 1976

B.M. Doouss and G.L. Collins

Abstract

This monograph defines distributed intelligence and discusses the relationship of distributed intelligence to data base, justifications for using the technique, and the approach to successful implementation of the technique. The approach is then illustrated by reference to a case study of experience at Birds Eye Foods. The planning process by which computing strategy for the company was decided is described, and the planning conclusions reached to date are given. The current state of development in the company is outlined and the very real savings so far achieved are specified. Finally, the main conclusions of the monograph are brought together. In essence, these conclusions are that major savings are achievable using distributed intelligence, and that the implementation of a company data processing plan can be made quicker and simpler by its use. However, careful central control must be maintained so as to avoid fragmentation of machines, language skills and applications.

Details

Management Decision, vol. 14 no. 6
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 1 June 2003

Jaroslav Mackerle

Abstract

This paper gives a bibliographical review of the finite element and boundary element parallel processing techniques from the theoretical and application points of view. Topics include: theory – domain decomposition/partitioning, load balancing, parallel solvers/algorithms, parallel mesh generation, adaptive methods, and visualization/graphics; applications – structural mechanics problems, dynamic problems, material/geometrical non‐linear problems, contact problems, fracture mechanics, field problems, coupled problems, sensitivity and optimization, and other problems; hardware and software environments – hardware environments, programming techniques, and software development and presentations. The bibliography at the end of this paper contains 850 references to papers, conference proceedings and theses/dissertations dealing with presented subjects that were published between 1996 and 2002.

Details

Engineering Computations, vol. 20 no. 4
Type: Research Article
ISSN: 0264-4401

Details

Integrated Land-Use and Transportation Models
Type: Book
ISBN: 978-0-080-44669-1

Article
Publication date: 20 December 2007

Tay Teng Tiow, Chu Yingyi and Sun Yang

Abstract

Purpose

To utilize the idle computational resources in a network to collectively solve medium- to large-scale problems, this paper aims to propose an integrated distributed computing platform, Java distributed code generating and computing (JDGC).

Design/methodology/approach

The proposed JDGC is fully decentralized in that every participating host is identical in function. It allows standard, single machine‐oriented Java programs to be transparently executed in a distributed system. The code generator reduces the communication overhead between runtime objects based on a detailed analysis of the communication affinities between them.
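The abstract does not spell out the placement algorithm, but the idea of using communication affinities to colocate runtime objects can be sketched. A hypothetical greedy scheme with invented affinity weights and a per-host capacity, neither taken from JDGC itself:

```python
def place_objects(affinity, hosts, capacity):
    """Assign objects to hosts, colocating heavily communicating pairs."""
    placement = {}            # object name -> host index
    load = [0] * hosts        # objects currently on each host

    def assign(obj, host):
        placement[obj] = host
        load[host] += 1

    # Visit pairs from strongest to weakest communication affinity.
    for (a, b), _weight in sorted(affinity.items(), key=lambda kv: -kv[1]):
        for obj, partner in ((a, b), (b, a)):
            if obj in placement:
                continue
            # Prefer the partner's host while it has room, so the pair's
            # messages stay local; otherwise take the least-loaded host.
            if partner in placement and load[placement[partner]] < capacity:
                assign(obj, placement[partner])
            else:
                assign(obj, min(range(hosts), key=load.__getitem__))
    return placement

links = {("ui", "cache"): 90, ("cache", "db"): 40, ("ui", "logger"): 5}
placement = place_objects(links, hosts=2, capacity=2)
print(placement)  # "ui" and "cache", the chattiest pair, share a host
```

Placing the heaviest-communicating pair first means its traffic never crosses the network, which is the kind of overhead reduction the code generator aims for.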

Findings

The experimental results show that JDGC can effectively reduce the execution time of applications by utilizing networked computational resources.

Originality/value

JDGC releases developers from any special programming considerations for a distributed environment, and solves the portability problem of using system‐specific programming methods.

Details

International Journal of Pervasive Computing and Communications, vol. 3 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 1 March 2001

David Finkel, Craig E. Wills, Michael J. Ciaraldi, Kevin Amorin, Adam Covati and Michael Lee

Abstract

Anonymous distributed computing systems consist of potentially millions of heterogeneous processing nodes connected by the global Internet. These nodes can be administered by thousands of organizations and individuals, with no direct knowledge of each other. This work defines anonymous distributed computing systems in general, then focuses on the specifics of an applet‐based approach for large‐scale, anonymous, distributed computing on the Internet. A user wishing to participate in a computation connects to a distribution server, which provides information about available computations, and then connects to a computation server that has a computation to distribute. A Java class is downloaded, which communicates with the computation server to obtain data, performs the computation, and returns the result. Since any computer on the Internet can participate in these computations, a potentially large number of computers can participate in a single computation.
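The three-step workflow in the abstract (contact a distribution server, fetch work from a computation server, compute and return the result) can be mocked in a few lines. The class names and the sum-of-squares task below are invented; the original system distributed Java applets, not Python:

```python
class DistributionServer:
    """Advertises the computation servers that currently have work."""
    def __init__(self, computations):
        self.computations = computations  # name -> ComputationServer
    def list_computations(self):
        return list(self.computations)

class ComputationServer:
    """Hands out work units and collects results from anonymous nodes."""
    def __init__(self, work_units):
        self.pending = list(work_units)
        self.results = []
    def get_work(self):
        return self.pending.pop() if self.pending else None
    def submit(self, result):
        self.results.append(result)

def volunteer_node(dist):
    # 1. Ask the distribution server what computations are available.
    name = dist.list_computations()[0]
    server = dist.computations[name]
    # 2. Fetch work units and run the downloaded task locally.
    while (unit := server.get_work()) is not None:
        server.submit(sum(x * x for x in unit))  # 3. Return the result.

comp = ComputationServer(work_units=[range(0, 10), range(10, 20)])
volunteer_node(DistributionServer({"sum-of-squares": comp}))
print(sorted(comp.results))  # [285, 2185]
```

Because every volunteer runs the same downloaded task against its own work unit, adding nodes scales the computation without the servers knowing who the nodes are.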

Details

Internet Research, vol. 11 no. 1
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 6 June 2016

Ema Kusen and Mark Strembeck

Abstract

Purpose

Ever since Mark Weiser coined the term “ubiquitous computing” (ubicomp) in 1988, there has been a general interest in proposing various solutions that would support his vision. However, attacks targeting devices and services of a ubicomp environment have demonstrated not only various privacy issues, but also a risk of endangering users’ lives (e.g. by modifying medical sensor readings). Thus, the aim of this paper is to provide a comprehensive overview of the security challenges of ubicomp environments and the corresponding countermeasures proposed over the past decade.

Design/methodology/approach

The results of this paper are based on a literature review method originally used in evidence-based medicine called systematic literature review (SLR), which identifies, filters, classifies and summarizes the findings.

Findings

Starting from the bibliometric results, which clearly show an increasing interest in the topic of ubicomp security worldwide, the findings reveal the specific types of attacks and vulnerabilities that have motivated the research over the past decade. This review describes the most commonly proposed countermeasures – context-aware access control and authentication mechanisms, cryptographic protocols that account for devices’ resource constraints, privacy-preserving mechanisms, and trust mechanisms for wireless ad hoc and sensor networks.
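Context-aware access control, the first countermeasure family listed, grants or denies a request based on attributes of the situation rather than identity alone. A minimal sketch with an invented medical-sensor policy; the roles, locations and hours are illustrative, not drawn from the surveyed papers:

```python
from dataclasses import dataclass

@dataclass
class Context:
    role: str        # who is asking
    location: str    # where the request originates
    hour: int        # local time of the request (0-23)

def may_read_sensor(ctx: Context) -> bool:
    """Decide access from the full context, not just the requester's role."""
    # A nurse may read medical sensors only from the ward, during shift hours.
    if ctx.role == "nurse":
        return ctx.location == "ward" and 6 <= ctx.hour < 22
    # An attending physician may read from anywhere at any time.
    return ctx.role == "physician"

print(may_read_sensor(Context("nurse", "ward", 14)))       # True: on shift, on site
print(may_read_sensor(Context("nurse", "cafeteria", 14)))  # False: wrong location
```

The same role yields different decisions as the context changes, which is precisely what distinguishes this family from classic role-based access control.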

Originality/value

To the best of our knowledge, this is the first SLR on security challenges in ubicomp. The findings should serve as a reference to an extensive list of scientific contributions, as well as a guiding point for researchers new to security research in ubicomp.

Details

International Journal of Pervasive Computing and Communications, vol. 12 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 9 October 2019

Elham Ali Shammar and Ammar Thabit Zahary

Abstract

Purpose

The internet has radically changed the way people interact in the virtual world, in their careers and in their social relationships. IoT technology has added a new vision to this process by enabling connections between smart objects and humans, and between smart objects themselves, leading to anything, anytime, anywhere and any-media communications. IoT allows objects to physically see, hear, think and perform tasks by making them talk to each other, share information and coordinate decisions. To enable this vision, IoT draws on technologies such as ubiquitous computing, context awareness, RFID, wireless sensor networks (WSNs), embedded devices, cyber-physical systems (CPS), communication technologies and internet protocols. IoT is considered to be the future internet, significantly different from the internet we use today. The purpose of this paper is to provide up-to-date literature on trends in IoT research, which is driven by the need for convergence of several interdisciplinary technologies and new applications.

Design/methodology/approach

A comprehensive IoT literature review has been performed in this paper as a survey. The survey starts by providing an overview of IoT concepts, visions and evolution. IoT architectures are also explored. Then, the most important components of IoT are discussed, including a thorough discussion of IoT operating systems such as TinyOS, Contiki, FreeRTOS and RIOT. A review of IoT applications is also presented and, finally, IoT challenges that researchers have recently encountered are introduced.

Findings

Studies of IoT literature and projects show the disproportionate importance of technology in IoT projects, which are often driven by technological interventions rather than innovation in the business model. There are a number of serious concerns about the dangers of IoT growth, particularly in the areas of privacy and security; hence, industry and government have begun addressing these concerns. In the end, what makes IoT exciting is that we do not yet know which use cases will have the ability to significantly influence our lives.

Originality/value

This survey provides a comprehensive literature review on IoT techniques, operating systems and trends.

Details

Library Hi Tech, vol. 38 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 16 April 2018

Masoud Nosrati and Mahmood Fazlali

Abstract

Purpose

One of the techniques for improving the performance of distributed systems is data replication, wherein new replicas are created to provide greater accessibility, fault tolerance and lower data access cost. In this paper, the authors propose a community-based solution for managing data replication, based on a graph model of the communication latency between computing and storage nodes. Communities are clusters of nodes among which communication latency is minimal. The purpose of this study is to use this method to minimize the latency and access cost of the data.

Design/methodology/approach

This paper used the Louvain algorithm to find the best communities. In the proposed algorithm, when a node in a community requested a file, the cost of accessing a file located outside the applicant’s community was calculated and the results were accumulated. When the accumulated cost exceeded a specified threshold, a new replica of the file was created in the applicant’s community. In addition, the number of replicas of each file was limited to prevent the system from creating useless and redundant data.
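The threshold rule just described can be sketched independently of the community detection step. In this illustrative Python sketch the communities are given directly (the paper derives them with Louvain), and the file name, costs, threshold and the assumption that the original copy lives in community 0 are all invented:

```python
class ReplicaManager:
    def __init__(self, threshold: float, max_replicas: int):
        self.threshold = threshold
        self.max_replicas = max_replicas
        self.replicas = {}      # file -> set of communities holding a copy
        self.accumulated = {}   # (file, community) -> summed access cost

    def access(self, file: str, community: int, cost: float) -> bool:
        """Record one access; return True if a new replica was created."""
        # Assume the original copy lives in community 0 (illustrative).
        holders = self.replicas.setdefault(file, {0})
        if community in holders:
            return False  # local access, no remote cost accrues
        key = (file, community)
        self.accumulated[key] = self.accumulated.get(key, 0.0) + cost
        if (self.accumulated[key] >= self.threshold
                and len(holders) < self.max_replicas):
            holders.add(community)   # replicate into the requesting community
            self.accumulated[key] = 0.0
            return True
        return False

mgr = ReplicaManager(threshold=10.0, max_replicas=2)
hits = [mgr.access("genome.dat", community=1, cost=4.0) for _ in range(3)]
print(hits)  # [False, False, True]: the third access pushes the cost past 10
```

The replica cap is what prevents the "useless and redundant data" the abstract warns about: once `max_replicas` communities hold a copy, further remote accesses accumulate cost but never trigger replication.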

Findings

To evaluate the method, four metrics were introduced and measured, including communication latency, response time, data access cost and data redundancy. The results indicated acceptable improvement in all of them.

Originality/value

To the best of the authors’ knowledge, this is the first research that aims at managing replicas via community detection algorithms. It opens many opportunities for further studies in this area.

Details

International Journal of Web Information Systems, vol. 14 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 14 November 2008

Salvatore Coco, Antonino Laudani and Giuseppe Pollicino

Abstract

Purpose

The paper's aim is to focus on the utilization of the GRID distributed computing environment in order to reduce simulation time for parameter studies of travelling wave tube (TWT) electron guns and helix slow‐wave structures.

Design/methodology/approach

Two TWT finite‐element analysis modules were adapted to run on the GRID; for this purpose, scripts were written to submit a collection of independent jobs (the parameter study) to the GRID and to collect the results.

Findings

A 25‐job electron gun parameter study runs on the GRID in 30‐40 min instead of 7 h locally. A 16‐job slow‐wave structure parameter study runs in 1 h on the GRID instead of 8 h locally. Turnaround time on the GRID was limited by priority levels presently set by GRID management for the various jobs submitted.

Practical implications

The procedures guarantee a remarkable reduction in computing time.

Originality/value

For computationally heavy tasks such as the above finite‐element electromagnetic calculations, the effective use of a heterogeneous, distributed computing platform (the GRID computing platform) is very advantageous. The paper shows the development of new-generation collaborative tools.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 27 no. 6
Type: Research Article
ISSN: 0332-1649
