Search results

1 – 10 of 85
Article
Publication date: 25 April 2024

Tulsi Pawan Fowdur and Ashven Sanghan

Abstract

Purpose

The purpose of this paper is to develop a blockchain-based data capture and transmission system that will collect real-time power consumption data from a household electrical appliance and transfer it securely to a local server for energy analytics such as forecasting.

Design/methodology/approach

The data capture system is composed of two current transformer (CT) sensors connected to two different electrical appliances. The CT sensors send the power readings to two Arduino microcontrollers, which in turn connect to a Raspberry Pi that aggregates the data. Blockchain is then enabled on the Raspberry Pi through a Java API so that the data are transmitted securely to a server. The server provides real-time visualization of the data as well as prediction using the multi-layer perceptron (MLP) and long short-term memory (LSTM) algorithms.

Findings

The results of the blockchain analysis demonstrate that transmitting the data readings in smaller blocks provides much greater security than using larger blocks. To assess the accuracy of the prediction algorithms, data were collected over a 20 min interval to train the models, and the algorithms were evaluated using the sliding window approach. The mean absolute percentage error (MAPE) was used to assess accuracy, and a MAPE of 1.62% and 1.99% was obtained for the LSTM and MLP algorithms, respectively.
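The sliding-window evaluation and MAPE metric described above can be sketched as follows; the moving-average predictor is only a stand-in for the paper's LSTM/MLP models, and the readings are invented:

```python
# Sliding-window evaluation with MAPE, using a moving-average
# predictor as a stand-in for the paper's LSTM/MLP models.

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def sliding_window_eval(series, window=5):
    """Predict each point from the mean of the preceding `window` readings."""
    actual, predicted = [], []
    for i in range(window, len(series)):
        history = series[i - window:i]
        predicted.append(sum(history) / window)
        actual.append(series[i])
    return mape(actual, predicted)

# Invented power readings (watts) standing in for the captured data.
readings = [100.0, 102.0, 101.0, 103.0, 104.0, 102.0, 105.0, 103.0, 104.0, 106.0]
print(round(sliding_window_eval(readings, window=3), 2))
```

The window slides one reading at a time, so every prediction is scored against a reading the model has not yet seen, mirroring the evaluation the abstract describes.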

Originality/value

A detailed performance analysis of the blockchain-based transmission model using time complexity, throughput and latency as well as energy forecasting has been performed.

Details

Sensor Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0260-2288

Open Access
Article
Publication date: 30 September 2021

Samuel Heuchert, Bhaskar Prasad Rimal, Martin Reisslein and Yong Wang

Abstract

Purpose

Major public cloud providers, such as AWS, Azure and Google, offer seamless experiences for infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). Given the vast usage of the public cloud, administrators need a reliable method to provide the seamless experience of a public cloud on a smaller scale, such as a private cloud. When a smaller deployment or a private cloud is needed, OpenStack can meet these goals without increasing cost or sacrificing data control.

Design/methodology/approach

To demonstrate these enablement goals of resiliency and elasticity in IaaS and PaaS, the authors design a private distributed system cloud platform using OpenStack and its core services of Nova, Swift, Cinder, Neutron, Keystone, Horizon and Glance on a five-node deployment.

Findings

Through the demonstration of dynamically adding an IaaS node, pushing the deployment to its physical and logical limits, and eventually crashing the deployment, this paper shows how the PackStack utility facilitates the provisioning of an elastic and resilient OpenStack-based IaaS platform that can be used in production if the deployment is kept within designated boundaries.

Originality/value

The authors adopt the multinode-capable PackStack utility in favor of an all-in-one OpenStack build for a true demonstration of resiliency, elasticity and scalability in a small-scale IaaS. An all-in-one deployment is generally used for proof-of-concept deployments and is not easily scaled in production across multiple nodes. The authors demonstrate that combining PackStack with the multi-node design is suitable for smaller-scale production IaaS and PaaS deployments.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 22 June 2022

Suvarna Abhijit Patil and Prasad Kishor Gokhale

Abstract

Purpose

With the advent of AI-federated technologies, it is feasible to perform complex tasks in the industrial Internet of Things (IIoT) environment by enhancing network throughput and reducing the latency of transmitted data. Communication in IIoT and Industry 4.0 requires the handshaking of multiple technologies to support heterogeneous networks and diverse protocols. IIoT applications may gather and analyse sensor data, allowing operators to monitor and manage production systems, resulting in considerable performance gains in automated processes. All IIoT applications generate vast sets of data with diverse characteristics. Obtaining optimum throughput in an IIoT environment requires efficient processing of IIoT applications over communication channels. Because computing resources in the IIoT are limited, equitable resource allocation with minimal delay is essential for IIoT applications. Although some existing scheduling strategies address delay concerns, faster data transmission and optimal throughput should be addressed alongside transmission delay. Hence, this study focuses on a fair mechanism that handles throughput, transmission delay and faster transmission of data. The proposed work provides a link-scheduling algorithm, termed delay-aware resource allocation, that allocates computing resources to computation-sensitive tasks, reducing overall latency and increasing the overall throughput of the network. First, a multi-hop delay model is developed with multistep delay prediction using an AI-federated long short-term memory (LSTM) neural network, which serves as a foundation for the subsequent design. Then, a link-scheduling algorithm is designed for efficient data routing. Extensive experimental results reveal that the proposed strategy minimizes the average end-to-end delay, considering processing, propagation, queueing and transmission delays.
Experiments show that advances in machine learning have led to a smart, collaborative link-scheduling algorithm for fairness-driven resource allocation with minimal delay and optimal throughput. The prediction performance of the AI-federated LSTM is compared with existing approaches and outperforms them, achieving 98.2% accuracy.
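The multi-hop delay model described above — end-to-end delay as the sum of per-hop processing, propagation, queueing and transmission delays — can be illustrated with a minimal sketch; all per-hop figures here are invented:

```python
# Sketch of the end-to-end delay model: the sum over hops of
# processing, propagation, queueing and transmission delays.
# All per-hop numbers are invented for illustration.

def hop_delay(proc_ms, prop_ms, queue_ms, pkt_bits, link_bps):
    """Delay contributed by one hop, in milliseconds."""
    transmission_ms = 1000.0 * pkt_bits / link_bps
    return proc_ms + prop_ms + queue_ms + transmission_ms

hops = [
    # (processing ms, propagation ms, queueing ms, packet bits, link bps)
    (0.5, 0.1, 1.0, 8000, 1_000_000),
    (0.5, 0.2, 2.0, 8000, 250_000),
]

end_to_end_ms = sum(hop_delay(*h) for h in hops)
print(round(end_to_end_ms, 2))
```

In the paper's setting, the queueing term is the one the LSTM-based prediction targets, since it varies with dynamic traffic; the other terms are largely fixed by the link and packet size.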

Design/methodology/approach

With the increase in IoT devices, the demand for IoT gateways has grown, which raises the cost of network infrastructure. As a result, the proposed system uses low-cost intermediate gateways. Each gateway may use a different communication technology for data transmission within an IoT network; gateways are therefore heterogeneous, with hardware support limited to the technologies associated with the wireless sensor networks. Fair data communication at each gateway is achieved by considering dynamic IoT traffic and the link-scheduling problem to achieve effective resource allocation in an IoT network. A two-phased solution is provided to solve these problems for improved data communication in heterogeneous networks while achieving fairness. In the first phase, dynamic traffic is predicted using an LSTM network model. In the second phase, efficient per-technology link selection and link scheduling are achieved based on the predicted load, the distance between gateways, link capacity and the time required by the supported technologies, such as Bluetooth, Wi-Fi and Zigbee. This enhances data transmission fairness for all gateways, resulting in more data transmission and maximum throughput. Simulation demonstrates that the proposed approach achieves maximum network throughput with less packet delay.
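A minimal sketch of the second phase (per-technology link selection) might look like the following; the capacities, ranges and the headroom criterion are illustrative assumptions, not the paper's actual algorithm:

```python
# Illustrative second-phase link selection: choose the technology
# whose link is feasible for the predicted load and distance.
# Capacities, ranges and the scoring rule are hypothetical.

LINKS = [
    # (technology, capacity in Mbps, nominal range in metres)
    ("Bluetooth", 2.0, 10),
    ("Zigbee", 0.25, 100),
    ("Wi-Fi", 54.0, 50),
]

def select_link(predicted_load_mbps, distance_m):
    """Return the feasible technology with the most spare capacity, or None."""
    feasible = [(tech, cap) for tech, cap, rng in LINKS
                if distance_m <= rng and cap >= predicted_load_mbps]
    if not feasible:
        return None
    # Prefer the link with the largest capacity headroom over the load.
    return max(feasible, key=lambda tc: tc[1] - predicted_load_mbps)[0]

print(select_link(0.1, 80))   # only Zigbee reaches 80 m
print(select_link(5.0, 30))   # Wi-Fi has both the range and the capacity
```

The predicted load fed into `select_link` is where the first phase's LSTM output would plug in.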

Findings

Simulation demonstrates that the proposed approach outperforms existing methods, achieving maximum network throughput with less packet delay. It also shows that AI- and IoT-federated devices can communicate seamlessly over IoT networks in Industry 4.0.

Originality/value

The concept is a part of the original research work and can be adopted by Industry 4.0 for easy and seamless connectivity of AI and IoT-federated devices.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Open Access
Article
Publication date: 19 May 2022

Akhilesh S Thyagaturu, Giang Nguyen, Bhaskar Prasad Rimal and Martin Reisslein

Abstract

Purpose

Cloud computing originated in central data centers that are connected to the backbone of the Internet. The network transport to and from a distant data center incurs long latencies that hinder modern low-latency applications. In order to flexibly support the computing demands of users, cloud computing is evolving toward a continuum of cloud computing resources that are distributed between the end users and a distant data center. The purpose of this review paper is to concisely summarize the state-of-the-art in the evolving cloud computing field and to outline research imperatives.

Design/methodology/approach

The authors identify two main dimensions (or axes) of development of cloud computing: the trend toward flexibility of scaling computing resources, which the authors denote as Flex-Cloud, and the trend toward ubiquitous cloud computing, which the authors denote as Ubi-Cloud. Along these two axes of Flex-Cloud and Ubi-Cloud, the authors review the existing research and development and identify pressing open problems.

Findings

The authors find that extensive research and development efforts have addressed some Ubi-Cloud and Flex-Cloud challenges resulting in exciting advances to date. However, a wide array of research challenges remains open, thus providing a fertile field for future research and development.

Originality/value

This review paper is the first to define the concept of the Ubi-Flex-Cloud as the two-dimensional research and design space for cloud computing research and development. The Ubi-Flex-Cloud concept can serve as a foundation and reference framework for planning and positioning future cloud computing research and development efforts.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 17 October 2022

Santosh Kumar B. and Krishna Kumar E.

Abstract

Purpose

Deep learning techniques are unavoidable in a variety of domains, such as health care, computer vision and cyber-security. These algorithms demand high data transfers but face bottlenecks in achieving high-speed, low-latency synchronization when implemented in real hardware architectures. Although the direct memory access controller (DMAC) has attracted considerable research attention for achieving bulk data transfers, existing direct memory access (DMA) systems continue to face challenges in achieving high-speed communication. The purpose of this study is to develop an adaptively configured DMA architecture for bulk data transfer with high throughput and less time-delayed computation.

Design/methodology/approach

The proposed methodology consists of a heterogeneous computing system integrated with specialized hardware and software. For the hardware, the authors propose a field-programmable gate array (FPGA)-based DMAC, which transfers the data to the graphics processing unit (GPU) using PCI Express. The workload characterization technique is designed in Python and is implementable on the Advanced RISC Machine (ARM) Cortex architecture with a suitable communication interface. This module offloads the input data streams to the FPGA and initiates the FPGA's control flow of data to the GPU to achieve efficient processing.

Findings

This paper presents an evaluation of a configurable workload-based DMA controller that collects data from input devices and concurrently applies it to the GPU architecture, bypassing extraneous hardware and software copies and bottlenecks via PCI Express. It also investigates the use of adaptive DMA memory buffer allocation and workload characterization techniques. The proposed DMA architecture is compared with existing DMA architectures; the proposed DMAC outperforms traditional DMA, achieving 96% throughput and 50% less latency synchronization.
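The case for DMA over CPU-managed copies can be illustrated with back-of-envelope transfer times; the link rates, buffer size and one-off setup cost below are illustrative assumptions, not measurements from the paper:

```python
# Back-of-envelope transfer time: with descriptor-based DMA the CPU
# pays a one-off setup cost and the transfer then runs at link rate,
# instead of copying word by word at a lower effective rate.
# All rates, sizes and costs here are illustrative.

def transfer_time_s(n_bytes, bandwidth_bps, setup_s=0.0):
    """Seconds to move n_bytes over a link of bandwidth_bps."""
    return setup_s + 8.0 * n_bytes / bandwidth_bps

# 64 MiB buffer: PCIe-class DMA path vs. a slower CPU-copy path.
dma = transfer_time_s(64 * 1024**2, 8e9, setup_s=5e-6)
pio = transfer_time_s(64 * 1024**2, 1e9)
print(dma < pio)  # True
```

The gap widens with buffer size, which is why bulk transfers for deep learning workloads benefit most.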

Originality/value

The proposed gated recurrent unit has produced 95.6% accuracy in characterization of the workloads into heavy, medium and normal. The proposed model has outperformed the other algorithms and proves its strength for workload characterization.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 25 March 2024

Xiaoxia Zhang, Jin Zhang, Peiyan Du and Guohe Wang

Abstract

Purpose

In this paper, the brain potential changes caused by touching fabrics during handle evaluation were recorded with the event-related potential (ERP) method and compared with subjective evaluation scores and the physical indices of the KES, to explore the cognitive mechanism by which tactile sensation is transformed into neural impulses triggered by subtle mechanical stimuli such as the material, texture, density and morphology of fabrics. By combining the subjective evaluation of fabric tactile sensation, the objective physical properties of fabrics and objective neurobiological signals, the paper explores the neurophysiological mechanism of tactile cognition and the signal characteristics and time course of tactile information processing.

Design/methodology/approach

The ERP technology was first proposed by the British psychologist Grey Walter. It is a noninvasive brain imaging technique whose potential changes are related to physical and mental activities. ERP differs from electroencephalography (EEG) and evoked potentials (EP) in that it can record not only the physical information of a stimulus transmitted to the brain, but also the psychological activities related to attention, identification, comparison, memory, judgment and cognition, as well as the neurophysiological changes caused by the cognitive processing of the stimulated sensation.

Findings

According to the potential changes in the cerebral cortex evoked by touching four types of silk fabrics, the human brain received the physical stimulation in the early stage (50 ms) of fabric handle evaluation, and the P50 component amplitude showed a negative correlation with the fabric smoothness sensation. Around 200 ms after tactile stimulus onset, the amplitude of the P200 component showed a positive correlation with the softness sensation of the silk fabrics. The relationship between the amplitude of P300 and the sense of smoothness and softness requires further evidence.
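The amplitude-sensation relationships reported above are correlations between a component amplitude and a subjective rating; a self-contained sketch, with invented amplitudes and ratings, is:

```python
# Pearson correlation between an ERP component amplitude and a
# subjective rating, as one might relate P50 amplitude to smoothness.
# The amplitude and rating values below are invented for illustration.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

p50_amplitude = [2.1, 1.8, 1.4, 1.0]   # hypothetical microvolts
smoothness = [1.0, 2.0, 3.0, 4.0]      # hypothetical ratings

r = pearson(p50_amplitude, smoothness)
print(round(r, 3))  # negative, matching the reported direction for P50
```

A positive `r` with a softness rating would correspond to the P200 finding in the same way.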

Originality/value

In this paper, the brain potential changes caused by touching fabrics during handle evaluation were recorded with the event-related potential (ERP) method and compared with subjective evaluation scores and the physical indices of the KES. The results show that the maximum amplitude of the P50 component evoked by fabric touching is related to the fabric's smoothness and roughness emotion, meaning that in the early stage of tactile processing, rougher fabrics arouse more attention. In addition, the amplitude of the P200 component shows a positive correlation with the softness sensation of silk fabrics.

Details

International Journal of Clothing Science and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 7 March 2024

Fulya Acikgoz, Nikolaos Stylos and Sophie Lythreatis

Abstract

Purpose

The purpose of this study is to synthesise the body of research revolving around blockchain technology (BCT) while drawing on the technology-organization-environment framework, resource-based theory and the theory of constraints, to conceptualize the capabilities (enablers) and constraints (barriers) of BCT in the hospitality and tourism (H&T) industry.

Design/methodology/approach

A systematic literature review of BCT in the hotel and tourism industry was conducted using two databases, Scopus and Web of Science. From 544 articles selected between 2008 and 2023 (first quarter), a sample of 49 articles was used to structure existing research on this subject.

Findings

The findings of this systematic literature review of BCT in the H&T literature establish a solid groundwork for assessing the evolution of this research area over time. Findings are classified into two groups: capabilities (enablers) and constraints (barriers) of BCT based on publication year, different research methods, theoretical underpinnings and applicable contexts.

Originality/value

To the best of the authors’ knowledge, this is one of the first attempts to synthesize studies related to BCT in H&T research by combining three theoretical approaches. It serves as a foundation to evaluate the development of BCT studies in this field.

Details

International Journal of Contemporary Hospitality Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0959-6119

Article
Publication date: 18 September 2023

Mohammadreza Akbari

Abstract

Purpose

The purpose of this study is to examine how the implementation of edge computing can enhance the progress of the circular economy within supply chains and to address the challenges and best practices associated with this emerging technology.

Design/methodology/approach

This study utilized a streamlined evaluation technique that employed Latent Dirichlet Allocation modeling for thorough content analysis. Extensive searches were conducted among prominent publishers, including IEEE, Elsevier, Springer, Wiley, MDPI and Hindawi, utilizing pertinent keywords associated with edge computing, circular economy, sustainability and supply chain. The search process yielded a total of 103 articles, with the keywords being searched specifically within the titles or abstracts of these articles.
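The keyword screening step described above — keeping articles whose titles or abstracts contain the search terms — can be sketched as follows; the records and the exact term list are invented for illustration:

```python
# Toy version of the screening step: keep articles whose title or
# abstract contains any of the search keywords. Records are invented.

KEYWORDS = {"edge computing", "circular economy", "sustainability", "supply chain"}

articles = [
    {"title": "Edge computing for supply chain visibility", "abstract": "..."},
    {"title": "A survey of quantum networking", "abstract": "..."},
]

def matches(article):
    """True if any keyword appears in the title or abstract."""
    text = (article["title"] + " " + article["abstract"]).lower()
    return any(kw in text for kw in KEYWORDS)

selected = [a["title"] for a in articles if matches(a)]
print(selected)
```

The retained subset is what would then feed the Latent Dirichlet Allocation topic modeling.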

Findings

There has been a notable rise in the volume of scholarly articles dedicated to edge computing in the circular economy and supply chain management. After conducting a thorough examination of the published papers, three main research themes were identified, focused on technology, optimization and circular economy and sustainability. Edge computing adoption in supply chains results in a more responsive, efficient and agile supply chain, leading to enhanced decision-making capabilities and improved customer satisfaction. However, the adoption also poses challenges, such as data integration, security concerns, device management, connectivity and cost.

Originality/value

This paper offers valuable insights into the research trends of edge computing in the circular economy and supply chains, highlighting its significant role in optimizing supply chain operations and advancing the circular economy by processing and analyzing real-time data generated by the Internet of Things, sensors and other state-of-the-art tools and devices.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 5 January 2024

Caroline Silva Araújo, Emerson de Andrade Marques Ferreira and Dayana Bastos Costa

Abstract

Purpose

Tracking physical resources at the construction site can generate information to support effective decision-making and building production control. However, the methods for conventional tracking usually offer low reliability. This study aims to propose the integrated Smart Twins 4.0 to track and manage metallic formworks used in cast-in-place concrete wall systems using internet of things (IoT) (operationalized by radio frequency identification [RFID]) and building information modeling (BIM), focusing on increasing quality and productivity.

Design/methodology/approach

Design science research is the research approach, including an exploratory study to map the constructive system, the integrated system development, an on-site pilot implementation in a residential project and a performance evaluation based on acquired data and the perception of the project’s production team.

Findings

In all rounds of requests, Smart Twins 4.0 registered and presented the status of the formworks and the work progress of the buildings in complete correspondence with the physical progress, providing information to support decision-making during operation. Moreover, the analyses of the system infrastructure and implementation details can guide researchers in future IoT and BIM implementations on real construction sites.

Originality/value

The primary contribution is the proposed system, centralized in a mobile app containing a Web-based virtual model that receives data in real time during construction phases and solves a real problem. The paper describes the development of Smart Twins 4.0 and its requirements for tracking physical resources, considering previous theoretical and practical research on RFID, IoT and BIM.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175

Open Access
Article
Publication date: 26 October 2020

Mohammed S. Al-kahtani, Lutful Karim and Nargis Khan

Abstract

Designing an efficient routing protocol that opportunistically forwards data to the destination node through nearby sensor nodes or devices is significantly important for an effective incident response and disaster recovery framework. Existing sensor routing protocols are mostly ineffective in such disaster recovery applications, as the networks are affected (destroyed or overused) in disasters such as earthquakes, floods, tsunamis and wildfires. These protocols require a large number of message transmissions to re-establish clusters and communications, which is not energy efficient and results in packet loss. This paper introduces ODCR, an energy-efficient and reliable opportunistic density clustered-based routing protocol for such emergency sensor applications. We perform simulations to measure the performance of the ODCR protocol in terms of network energy consumption, throughput and packet loss ratio. Simulation results demonstrate that the ODCR protocol is much better than the existing TEEN, LEACH and LORA protocols in terms of these performance metrics.
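The three metrics the simulation reports can be computed from simple packet counts; the per-packet energy costs below are hypothetical placeholders, not values from the paper:

```python
# Toy computation of the three reported simulation metrics:
# throughput, packet loss ratio and total energy consumption.
# Per-packet energy costs are hypothetical placeholders.

def metrics(sent, delivered, sim_time_s, tx_energy_j, rx_energy_j):
    throughput = delivered / sim_time_s        # packets per second
    loss_ratio = (sent - delivered) / sent     # fraction of packets lost
    energy = sent * tx_energy_j + delivered * rx_energy_j
    return throughput, loss_ratio, energy

tput, loss, energy = metrics(sent=1000, delivered=950, sim_time_s=100,
                             tx_energy_j=50e-6, rx_energy_j=30e-6)
print(tput, round(loss, 3), round(energy, 4))
```

A protocol that avoids re-clustering after a failure improves all three at once: fewer control messages mean less energy spent and fewer data packets dropped while clusters are rebuilt.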

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964
