Search results

1 – 10 of over 17,000
Article
Publication date: 28 June 2021

Xianke Sun, Gaoliang Wang, Liuyang Xu and Honglei Yuan

In data grids, replication has been regarded as a crucial optimization strategy. Computing tasks are performed on IoT gateways at the cloud edges to obtain a prompt response. So…

Abstract

Purpose

In data grids, replication has been regarded as a crucial optimization strategy. Computing tasks are performed on IoT gateways at the cloud edges to obtain a prompt response, so investigating data replication mechanisms in the IoT is necessary. Hence, this paper presents a systematic survey of data replication strategies in the IoT and offers some suggestions for upcoming work. The prevalent approaches are analyzed against various parameters under two key classifications. The pros and cons of the chosen strategies are explored, and their essential problems are presented to foster the development of more effective data replication strategies. We have also identified gaps in the papers and proposed solutions for them.

Design/methodology/approach

Progress in information technology (IT) has brought the Internet of Things (IoT) into everyday life, where it takes a vital role in our lifestyles. The big data generated by the IoT brings tremendous data processing challenges. One of the most challenging problems is data replication, which improves fault tolerance, reliability, and accessibility: if the primary data source fails, a replica can be swapped in immediately. Data replication techniques significantly influence the IoT, yet no extensive and systematic research exists in this area, and there is still no systematic way to survey and evaluate the relevant methods. Hence, the present investigation provides a literature review of IoT-based data replication covering papers published up to 2021. The chosen papers are reviewed according to the given guidelines. After establishing exclusion and inclusion criteria, an independent systematic search for relevant studies was performed in Google Scholar, ACM, Scopus, ERIC, ScienceDirect, SpringerLink, Emerald, ProQuest, and IEEE, and 21 papers (6 analyzed in Section 1 and 15 analyzed in Section 3) were selected for analysis.

Findings

The results showed that IoT data replication mechanisms outperform other algorithms with respect to network utilization, job execution time, hit ratio, total number of replicas, and the percentage of storage utilized. Although several ideas addressing different facets of IoT data management have been suggested, we predict that there is still space for development and further study. Thus, to guide the design of innovative and more effective methods for future IoT-based structures, we explored open research directions in the domain of efficient data processing.

Research limitations/implications

The present investigation has some limitations. First, only papers published in English were included; papers on data replication processes in the IoT certainly exist in other languages, but they were not included in our research. Second, the current report analyzed only papers mined through keyword searches on data replication processes and the IoT, so methods for IoT data replication published without these keywords may have been missed. Papers presented in national conferences and journals are also excluded from this review. To achieve the highest quality, this analysis contains papers from major international academic journals.

Practical implications

The article illustrates that data provenance is essential for appreciating the significance and accuracy of the data often produced by different entities. The results contribute strong suggestions for future IoT studies. To make use of the data, administrators have to develop new skills. The current analysis keeps pace with the speed of publication and offers the findings of research and experience as a future path for decision-makers in IoT data replication.

Social implications

In general, raising the level of knowledge among scientists, academics, and managers will enhance administrators' positive and deliberate behaviour in handling IoT environments. We anticipate that the findings of the present report could lead investigators to produce more efficient data replication methods in the IoT with regard to data type and data volume.

Originality/value

This report provides a detailed literature review of data replication strategies in the IoT. The scarcity of such papers increases the importance of this one. Using the responses to the study questions, data replication's primary purpose, current problems, research concepts, and processes in the IoT are summarized. This will allow investigators to establish more reliable IoT techniques for data replication in the future. To the best of our knowledge, our research is the first to provide a thorough overview and evaluation of the current solutions by categorizing them into static/dynamic and distributed replication subcategories. We conclude the article by outlining possible future research paths.

Details

Library Hi Tech, vol. 39 no. 4
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 18 October 2019

A. Abdollahi Nami and L. Rajabion

A mobile ad hoc network (MANET) enables providers and customers to communicate without a fixed infrastructure. Databases are extended on MANETs to have easy data access and…

Abstract

Purpose

A mobile ad hoc network (MANET) enables providers and customers to communicate without a fixed infrastructure. Databases are deployed on MANETs to provide easy data access and updates. As the energy and mobility limitations of both servers and clients affect the availability of data in MANETs, these data are replicated. The purpose of this paper is to provide a literature review of data replication issues and classify the available strategies based on the issues they address.

Design/methodology/approach

The selected articles are reviewed based on the defined criteria, and the differences, advantages, and disadvantages of these techniques are described. The methods in the literature can be categorized into three groups: cluster-based, location-based and group-based mechanisms.

Findings

High flexibility and data consistency are features of cluster-based mechanisms. Location-based mechanisms are appropriate for replica allocation and mostly exhibit low network traffic and delay. The group-based mechanism offers high data accessibility compared to the other mechanisms. Data accessibility and time have received the most attention in data replication techniques, while scalability is an important parameter that must be considered more in the future. The reduction of storage cost in MANETs is the main goal of data replication, so researchers have to weigh the cost parameter whenever another parameter will be influenced.

Research limitations/implications

Data replication in MANETs has been covered in various sources such as web pages, technical reports, academic publications and editorial notes. Articles published in national journals and conferences are ignored in this study; to ensure the highest quality, it includes only articles from major international academic journals.

Originality/value

The paper reviews past and state-of-the-art mechanisms for data replication in MANETs. Data replication's main goal, existing challenges, research terminologies and mechanisms in MANETs are summarized using the answers to the research questions. This will help researchers develop more effective data replication methods for MANETs in the future.

Details

International Journal of Pervasive Computing and Communications, vol. 15 no. 3/4
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 16 April 2018

Masoud Nosrati and Mahmood Fazlali

One of the techniques for improving the performance of distributed systems is data replication, wherein new replicas are created to provide more accessibility, fault tolerance and…

Abstract

Purpose

One of the techniques for improving the performance of distributed systems is data replication, wherein new replicas are created to provide greater accessibility, fault tolerance and lower data access cost. In this paper, the authors propose a community-based solution for managing data replication, based on a graph model of the communication latency between computing and storage nodes. Communities are clusters of nodes among which communication latency is minimal. The purpose of this study is to use this method to minimize the latency and access cost of the data.

Design/methodology/approach

This paper used the Louvain algorithm to find the best communities. In the proposed algorithm, each time a node requests a file located outside its own community, the cost of that access is calculated and accumulated. When the accumulated cost exceeds a specified threshold, a new replica of the file is created in the requesting node's community. Besides, the number of replicas of each file is limited to prevent the system from creating useless and redundant data.
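
To make the threshold mechanism concrete, here is a minimal Python sketch, assuming networkx for Louvain community detection; the inverse-latency edge weighting, the constants REPLICA_LIMIT and COST_THRESHOLD, and the ReplicaManager interface are illustrative assumptions, not the authors' implementation.

```python
import networkx as nx

REPLICA_LIMIT = 3        # hypothetical cap on replicas per file
COST_THRESHOLD = 50.0    # hypothetical accumulated-cost threshold

def find_communities(latency_graph: nx.Graph) -> list[set]:
    # Louvain maximises modularity over edge weights, so weight each edge
    # by 1/latency: low-latency node pairs then cluster together.
    g = nx.Graph()
    for u, v, data in latency_graph.edges(data=True):
        g.add_edge(u, v, weight=1.0 / data["latency"])
    return nx.community.louvain_communities(g, weight="weight")

class ReplicaManager:
    def __init__(self, communities, placement):
        # placement: file -> index of the community initially holding it
        self.communities = communities
        self.replicas = {f: {c} for f, c in placement.items()}
        self.accum_cost = {}  # (file, community) -> accumulated access cost

    def community_of(self, node) -> int:
        return next(i for i, c in enumerate(self.communities) if node in c)

    def request(self, node, file, access_cost: float):
        com = self.community_of(node)
        if com in self.replicas[file]:
            return  # served from a local replica; no remote cost accrues
        key = (file, com)
        self.accum_cost[key] = self.accum_cost.get(key, 0.0) + access_cost
        # Create a replica in the requester's community once the accumulated
        # cost passes the threshold, subject to the per-file replica limit.
        if (self.accum_cost[key] > COST_THRESHOLD
                and len(self.replicas[file]) < REPLICA_LIMIT):
            self.replicas[file].add(com)
            self.accum_cost[key] = 0.0
```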

Findings

To evaluate the method, four metrics were measured: communication latency, response time, data access cost and data redundancy. The results indicated acceptable improvement in all of them.

Originality/value

To date, this is the first research that aims at managing replicas via community detection algorithms. It opens many opportunities for further studies in this area.

Details

International Journal of Web Information Systems, vol. 14 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 25 October 2023

Lisa Ogilvie and Jerome Carson

The purpose of this study is to see if the affirmative results seen in the pilot study of the positive addiction recovery therapy (PART) programme are replicable and durable given…

Abstract

Purpose

The purpose of this study is to see if the affirmative results seen in the pilot study of the positive addiction recovery therapy (PART) programme are replicable and durable given a new cohort of participants. PART is a programme of work designed to improve the recovery and well-being of people in early addiction recovery. Its foundation is in the G-CHIME (growth, connectedness, hope, identity, meaning in life and empowerment) model of addiction recovery. It also uses the Values in Action character strengths and includes a set of recovery protection techniques.

Design/methodology/approach

This study uses a mixed-methods experimental design, incorporating direct replication and a follow-up study. Measures for recovery capital, well-being and level of flourishing are used to collect pre-, post- and one-month follow-up data from participants. The replication data analysis uses the non-parametric Wilcoxon test, and the follow-up analysis uses the Friedman test with pairwise comparison post hoc analysis. The eligibility criteria ensure participants (n = 35) are all in early addiction recovery, classified as having been abstinent for between three and six months.
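
As an illustration of this analysis pipeline, here is a minimal sketch using scipy.stats on synthetic scores (n = 35 to match the cohort); the arrays are fabricated placeholders rather than study data, and the post hoc pairwise step is omitted for brevity.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(0)
pre = rng.normal(40, 8, 35)              # hypothetical well-being scores
post = pre + rng.normal(6, 4, 35)        # improvement after the programme
follow_up = post + rng.normal(0, 3, 35)  # scores one month later

# Direct replication: paired, non-parametric pre/post comparison.
stat, p = wilcoxon(pre, post)
print(f"Wilcoxon: W={stat:.1f}, p={p:.4f}")

# Durability: Friedman test across the three repeated measures
# (pairwise post hoc comparisons would follow a significant result).
stat, p = friedmanchisquare(pre, post, follow_up)
print(f"Friedman: chi2={stat:.1f}, p={p:.4f}")
```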

Findings

This study found a statistically significant improvement in well-being, recovery capital and flourishing on completion of the PART programme. These findings upheld the hypotheses of the pilot study and replicated the successful results it reported. The gains were also sustained at the one-month follow-up.

Practical implications

This study endorses the efficacy of the PART programme and its continued use in a clinical setting. It also adds further credibility to adopting a holistic approach when delivering interventions which consider important components of addiction recovery such as those outlined in the G-CHIME model.

Originality/value

This study adds to the existing evidence base endorsing the PART programme and the applied use of the G-CHIME model.

Details

Advances in Dual Diagnosis, vol. 16 no. 4
Type: Research Article
ISSN: 1757-0972

Article
Publication date: 3 April 2009

Jehn‐Ruey Jiang, Chung‐Ta King, Chi‐Shiang Liao and Ching‐Hao Liu

The purpose of this paper is to propose MUREX, a mutable replica control scheme, to keep one‐copy equivalence for synchronous replication in structured peer‐to‐peer (P2P) storage…

Abstract

Purpose

The purpose of this paper is to propose MUREX, a mutable replica control scheme, to keep one‐copy equivalence for synchronous replication in structured peer‐to‐peer (P2P) storage systems.

Design/methodology/approach

For synchronous replication in P2P networks, it is proper to adopt crash-recovery as the fault model; that is, nodes are fail-stop and can recover and rejoin the system after synchronizing their states with other active nodes. In addition to this state synchronization problem, the paper identifies two other problems to solve for synchronous replication in P2P storage systems: the replica acquisition and replica migration problems.

Findings

On the basis of multi-column read/write quorums, MUREX overcomes these problems through replica pointer, on-demand replica regeneration, and leased lock techniques.
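
As background on how quorum intersection yields one-copy equivalence, here is a minimal Python sketch using simple majority-style quorums; MUREX's actual multi-column quorums, replica pointers and leased locks are more involved, so the names and sizes below are illustrative assumptions only.

```python
# Any read quorum intersects any write quorum when R + W > N,
# so every read sees the most recently written version.
N = 5   # number of replicas
W = 3   # write quorum size
R = 3   # read quorum size; R + W > N guarantees overlap

replicas = [{"version": 0, "value": None} for _ in range(N)]

def write(value, quorum):
    assert len(quorum) >= W
    # A full protocol would first query a read quorum for the version;
    # here we derive it from the write quorum for brevity.
    new_version = max(replicas[i]["version"] for i in quorum) + 1
    for i in quorum:
        replicas[i] = {"version": new_version, "value": value}

def read(quorum):
    assert len(quorum) >= R
    # The freshest replica in any read quorum holds the latest write.
    newest = max((replicas[i] for i in quorum), key=lambda r: r["version"])
    return newest["value"]

write("x=1", quorum=[0, 1, 2])
print(read(quorum=[2, 3, 4]))  # overlap at replica 2 returns "x=1"
```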

Originality/value

The paper proves the correctness of MUREX and both analyzes and simulates it in terms of communication cost and operation success rate.

Details

International Journal of Pervasive Computing and Communications, vol. 5 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 23 March 2023

Mert Gülçür, Kevin Couling, Vannessa Goodship, Jérôme Charmet and Gregory J. Gibbons

The purpose of this study is to demonstrate and characterise a soft-tooled micro-injection moulding process through in-line measurements and surface metrology using a data…

Abstract

Purpose

The purpose of this study is to demonstrate and characterise a soft-tooled micro-injection moulding process through in-line measurements and surface metrology using a data-intensive approach.

Design/methodology/approach

A soft tool for a demonstrator product that mimics the main features of miniature components in medical devices and microsystem components has been designed and fabricated using a material jetting technique. The soft tool was then integrated into a mould assembly on the micro-injection moulding machine, and mouldings were made. Sensing and data acquisition devices, including thermal imaging and injection pressure sensors, have been set up to collect data for each of the prototypes. Off-line dimensional characterisation of the parts and the soft tool has also been carried out to quantify prototype quality and the dimensional changes on the soft tool after the manufacturing cycles.

Findings

The data collection and analysis methods presented here enable the evaluation of the quality of the moulded parts in real time from in-line measurements. Importantly, it is demonstrated that soft-tool surface temperature difference values can be used as reliable indicators for moulding quality. Reduction in the total volume of the soft-tool moulding cavity was detected and quantified up to 100 cycles. Data collected from in-line monitoring was also used for filling assessment of the soft-tool moulding cavity, providing about 90% accuracy in filling prediction with relatively modest sensors and monitoring technologies.
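
As a rough illustration of how a surface temperature difference can serve as a quality indicator, here is a minimal Python sketch of a threshold classifier over synthetic thermal frames; the threshold value, frame sizes and the filled() helper are hypothetical, not the study's pipeline.

```python
import numpy as np

DELTA_T_THRESHOLD = 4.0  # hypothetical minimum temperature rise, in kelvin

def filled(frame_before: np.ndarray, frame_after: np.ndarray,
           cavity_mask: np.ndarray) -> bool:
    """Classify a shot as filled from two thermal frames of the tool face."""
    delta_t = (frame_after - frame_before)[cavity_mask].mean()
    return delta_t >= DELTA_T_THRESHOLD

# Hypothetical 64x64 thermal frames; hot polymer raises cavity temperature.
before = np.full((64, 64), 25.0)
after = before.copy()
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True    # pixels covering the moulding cavity
after[mask] += 6.0           # simulated rise where the melt filled
print(filled(before, after, mask))  # True: shot classified as filled
```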

Originality/value

This work presents a data-intensive approach for the characterisation of soft-tooled micro-injection moulding processes for the first time. The overall results of this study show that the product-focussed data-rich approach presented here proved to be an essential and useful way of exploiting additive manufacturing technologies for soft-tooled rapid prototyping and new product introduction.

Article
Publication date: 9 September 2014

Wolfgang Zenk-Möltgen and Greta Lepthien

Data sharing is key for replication and re-use in empirical research. Scientific journals can play a central role by establishing data policies and providing technologies. The…

Abstract

Purpose

Data sharing is key for replication and re-use in empirical research. Scientific journals can play a central role by establishing data policies and providing technologies. The purpose of this paper is to analyse the factors which influence data sharing by investigating journal data policies and the behaviour of authors in sociology.

Design/methodology/approach

The websites of 140 sociology journals were consulted to check their data policies. The results are compared with similar studies from political science and economics. A broad selection of articles published in five selected journals over a period of two years is examined to determine whether authors really cite and share their data, and which factors are related to this.

Findings

Although only a few sociology journals have explicit data policies, most journals refer to a common policy supplied by their publishers' association. Among the journals selected, relatively few articles provide data citations and even fewer make data available – this is true for journals both with and without a data policy. However, authors writing for journals with higher impact factors and with data policies are more likely to cite data and to make it genuinely accessible.

Originality/value

No study of journal data policies has been undertaken to date for the domain of sociology. A comparison of authors’ behaviours regarding data availability, data citation, and data accessibility for journals with or without a data policy provides useful information about the factors which improve data sharing.

Details

Online Information Review, vol. 38 no. 6
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 2 February 2015

Milorad Pantelija Stevic, Branko Milosavljevic and Branko Rade Perisic

Current e-learning platforms are based on relational database management systems (RDBMS) and are well suited for handling structured data. However, it is expected from e-learning…

Abstract

Purpose

Current e-learning platforms are based on relational database management systems (RDBMS) and are well suited to handling structured data. However, e-learning solutions are also expected to handle unstructured data efficiently. The purpose of this paper is to show an alternative to current solutions for unstructured data management.

Design/methodology/approach

The current repository-based solution for file management was compared to a MongoDB architecture according to their functionalities and characteristics across several categories: data integrity, hardware acquisition, file processing, availability, handling of concurrent users, partition tolerance, disaster recovery, backup policies and scalability.
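
As a concrete example of the MongoDB side of such a hybrid architecture, here is a minimal Python sketch using pymongo's GridFS (which chunks files beyond MongoDB's 16 MB document limit), assuming a local MongoDB instance; the database and field names are illustrative.

```python
import gridfs
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical server
db = client["elearning"]
fs = gridfs.GridFS(db)

# Store an unstructured artefact (e.g. a lecture recording) with metadata;
# structured records (enrolments, grades) would stay in the RDBMS.
file_id = fs.put(b"fake video bytes", filename="lecture01.mp4", course_id=42)

# Stream it back for delivery to a student.
grid_out = fs.get(file_id)
data = grid_out.read()
print(grid_out.filename, len(data), "bytes")
```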

Findings

This paper shows that it is possible to improve e-learning platform capabilities by implementing a hybrid database architecture that incorporates RDBMS for handling structured data and MongoDB database system for handling unstructured data.

Research limitations/implications

The study shows an acceptable adoption of MongoDB inside a service-oriented architecture (SOA) for enhancing e-learning solutions.

Practical implications

This research enables efficient file handling not only for e-learning systems but also for any system where file handling is needed.

Originality/value

It is expected that future single and joint e-learning initiatives will need to manage huge numbers of files and will require an effective file handling solution. A new architectural solution for file handling is offered in this paper: it differs from current solutions in being less expensive, more efficient, more flexible, and requiring less administrative and development effort to build and maintain.

Details

Program, vol. 49 no. 1
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 4 December 2017

Alistair Brandon-Jones

Despite significant investment in e-procurement by many organisations, perceived failings in the quality of such technologies and of the support provided to use them – termed here…

Abstract

Purpose

Despite significant investment in e-procurement by many organisations, perceived failings in the quality of such technologies and of the support provided to use them – termed here e-procurement quality – continue to generate resistance from internal customers who must assimilate e-procurement into their daily routines. Hence, the purpose of this paper is to advance the understanding of e-procurement quality from an internal customer perspective and to develop, refine, and validate construct measures.

Design/methodology/approach

Research was undertaken in the UK and the Netherlands incorporating a literature review, a qualitative study with 58 interviews, a quantitative study with 274 survey respondents, and a replication study with 154 survey respondents.

Findings

Analysis reveals that e-procurement quality comprises five universally applicable dimensions: processing, content, usability, professionalism, and training. A sixth dimension, specification, appears to be applicable, but context specific.

Originality/value

The study represents one of the most extensive investigations of e-procurement quality to date and is the first to examine its underlying dimensional structure. The multi-item scales developed and validated using a mixed-methods process are suitable for theory building and testing, as well as providing useful diagnostic value to practitioners.

Details

International Journal of Operations & Production Management, vol. 37 no. 12
Type: Research Article
ISSN: 0144-3577

Article
Publication date: 21 September 2015

Moumita Das, Jack C.P. Cheng and Kincho H. Law

The purpose of this paper is to present a framework for integrating construction supply chain in order to resolve the data heterogeneity and data sharing problems in the…

Abstract

Purpose

The purpose of this paper is to present a framework for integrating construction supply chains in order to resolve the data heterogeneity and data sharing problems in the construction industry.

Design/methodology/approach

Standardized web service technology is used in the proposed framework for data specification, transfer, and integration. The open standard SAWSDL is used to annotate web service descriptions with pointers to concepts defined in ontologies. The NoSQL database Cassandra is used for distributed data storage among construction supply chain stakeholders.
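
As a concrete illustration of the Cassandra layer, here is a minimal Python sketch using the cassandra-driver package, assuming a local node; the purchase-order schema below is an illustrative assumption, not the paper's actual data model.

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # hypothetical contact point
session = cluster.connect()

# Replicated keyspace: each stakeholder's node holds copies of shared data.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS supply_chain
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS supply_chain.purchase_orders (
        order_id uuid PRIMARY KEY,
        material text,
        quantity int,
        supplier text
    )
""")

# Insert a record that contractor and supplier nodes can both read.
session.execute(
    "INSERT INTO supply_chain.purchase_orders "
    "(order_id, material, quantity, supplier) "
    "VALUES (uuid(), %s, %s, %s)",
    ("ready-mix concrete", 120, "ACME Materials"),
)
```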

Findings

Ontology can be used to support heterogeneous data transfer and integration through web services. Distributed data storage facilitates data sharing and enhances data control.

Practical implications

This paper presents examples of two ontologies for expressing construction supply chain information – an ontology for materials and an ontology for purchase orders. An example scenario is presented to demonstrate the proposed web service framework for a material procurement process involving three parties, namely the project manager, contractor, and material supplier.

Originality/value

The use of web services is not new to construction supply chains (CSCs). However, channelling information along CSCs still faces problems due to data heterogeneity, and trust issues are a barrier to information sharing when supply chains are integrated in a centralized collaboration system. In this paper, the authors present a web service framework which facilitates the storage and sharing of information in a distributed manner, mediated through ontology-based web services. Security is enhanced with access control. A data model for the distributed databases is also presented for data storage and retrieval.

Details

Engineering, Construction and Architectural Management, vol. 22 no. 5
Type: Research Article
ISSN: 0969-9988

1 – 10 of over 17,000