Search results

1 – 10 of over 16,000
Article
Publication date: 3 July 2020

Azra Nazir, Roohie Naaz Mir and Shaima Qureshi

The trend of “Deep Learning for Internet of Things (IoT)” has gained fresh momentum with enormous upcoming applications employing these models as their processing engine and Cloud…

Abstract

Purpose

The trend of “Deep Learning for Internet of Things (IoT)” has gained fresh momentum, with enormous upcoming applications employing these models as their processing engine and the Cloud as their resource giant. However, this picture leads to underutilization of the ever-growing pool of IoT devices, which had already passed the 15 billion mark in 2015. It is therefore high time to explore a different approach to tackling this issue, keeping in view the characteristics and needs of the two fields. Processing at the Edge can boost applications with real-time deadlines while also complementing security.

Design/methodology/approach

This review paper contributes towards three cardinal directions of research in the field of DL for IoT. The first section covers the categories of IoT devices and how Fog can aid in overcoming the underutilization of millions of devices, forming the “things” realm of IoT. The second direction handles the immense computational requirements of DL models by uncovering specific compression techniques. An appropriate combination of these techniques, including regularization, quantization and pruning, can aid in building an effective compression pipeline for establishing DL models for IoT use cases. The third direction incorporates both these views and introduces a novel approach of parallelization for setting up a distributed-systems view of DL for IoT.
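
As a concrete illustration of such a pipeline, the following minimal sketch combines magnitude pruning with post-training quantization. It assumes PyTorch, and the toy model, sparsity level and quantization settings are illustrative choices, not the specific pipeline evaluated in the paper:

```python
# Hypothetical prune-then-quantize pipeline; the model and settings below
# are illustrative assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Step 1: L1 (magnitude) pruning -- zero out the 60% smallest weights per layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.6)
        prune.remove(module, "weight")  # bake the sparsity into the tensor

# Step 2: post-training dynamic quantization -- int8 weights for Linear layers.
compressed = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(compressed)  # a smaller model, better suited to edge deployment
```

Regularization (e.g., an L1 penalty during training) would typically precede these steps to encourage sparsity before pruning.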

Findings

DL models are growing deeper with every passing year. Well-coordinated distributed execution of such models using Fog points to a promising future for the IoT application realm. A vertically partitioned compressed deep model can handle the trade-off between size, accuracy, communication overhead, bandwidth utilization and latency, but at the expense of a considerable additional memory footprint. To reduce the memory budget, we propose to exploit Hashed Nets as potentially favorable candidates for distributed frameworks. However, the critical point between accuracy and size for such models needs further investigation.
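
The memory saving behind Hashed Nets comes from the hashing trick: every position of a virtual weight matrix is mapped to a bucket in a small shared parameter vector. The sketch below, assuming PyTorch, shows the idea; a faithful implementation would compute the hash on the fly rather than storing an index table, which is kept here only for brevity:

```python
# Hypothetical HashedNets-style layer: many virtual weights share few real
# parameters. Bucket count and hashing scheme are illustrative assumptions.
import torch
import torch.nn as nn

class HashedLinear(nn.Module):
    def __init__(self, in_features, out_features, n_buckets):
        super().__init__()
        self.shared = nn.Parameter(torch.randn(n_buckets) * 0.01)
        # Fixed random map from each virtual weight position to a bucket.
        # A real implementation would hash (i, j) on the fly instead.
        idx = torch.randint(0, n_buckets, (out_features, in_features))
        self.register_buffer("idx", idx)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        weight = self.shared[self.idx]  # expand to the virtual full matrix
        return x @ weight.t() + self.bias

layer = HashedLinear(784, 256, n_buckets=5_000)  # ~5k trainable weights vs ~200k dense
out = layer(torch.randn(32, 784))
print(out.shape)  # torch.Size([32, 256])
```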

Originality/value

To the best of our knowledge, no study has explored the inherent parallelism in deep neural network architectures for their efficient distribution over the Edge–Fog continuum. Besides covering techniques and frameworks that have tried to bring inference to the Edge, the review uncovers significant issues and possible future directions for endorsing deep models as processing engines for real-time IoT. The study is directed at both researchers and industrialists seeking to move various applications to the Edge for a better user experience.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 17 September 2021

Sukumar Rajendran, Sandeep Kumar Mathivanan, Prabhu Jayagopal, Kumar Purushothaman Janaki, Benjula Anbu Malar Manickam Bernard, Suganya Pandy and Manivannan Sorakaya Somanathan

Artificial Intelligence (AI) has surpassed expectations in opening up different possibilities for machines from different walks of life. Cloud service providers are pushing Edge…

Abstract

Purpose

Artificial Intelligence (AI) has surpassed expectations in opening up different possibilities for machines from different walks of life. Cloud service providers are pushing Edge computing, which reduces latency, improves availability and saves bandwidth.

Design/methodology/approach

The exponential growth in tensor processing units (TPUs) and graphics processing units (GPUs), combined with different types of sensors, has enabled the pairing of medical technology with deep learning to provide the best patient care. Big data plays a significant role in pushing and pulling data from the cloud, as the velocity, veracity and volume of IoT data assist doctors in predicting abnormalities and providing customized treatment based on the patient's electronic health record (EHR).

Findings

The primary focus of edge computing is decentralizing computation and bringing intelligent IoT devices to provide real-time computing at the point of presence (PoP). The impact of edge computing at the PoP in healthcare gains importance as wearable devices and mobile apps are entrusted with real-time monitoring and diagnosis of patients.

Originality/value

The utility value of sensor data improves through the Laplace mechanism, which preserves personally identifiable information (PII) in the response to each query from the ODL. Scalability is at 50% with respect to the sensitivity and preservation of the PII values in the local ODL.
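
For readers unfamiliar with the Laplace mechanism referenced here, the sketch below shows its standard form: noise with scale sensitivity/epsilon is added to each numeric query answer so that individual PII values are protected. The epsilon, sensitivity and query values are illustrative, not taken from the paper:

```python
# Standard Laplace mechanism from differential privacy; parameter values
# below are illustrative assumptions, not the paper's settings.
import numpy as np

def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private answer to a numeric query."""
    scale = sensitivity / epsilon
    return true_answer + np.random.laplace(loc=0.0, scale=scale)

# Example: a query over a local record store, with sensitivity 1.0.
private_avg = laplace_mechanism(true_answer=72.4, sensitivity=1.0, epsilon=0.5)
print(private_avg)
```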

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 1
Type: Research Article
ISSN: 1756-378X

Book part
Publication date: 29 May 2023

R. Dhanalakshmi, Monica Benjamin, Arunkumar Sivaraman, Kiran Sood and S. S. Sreedeep

Purpose: With this study, the authors aim to highlight the application of machine learning in smart appliances used in our day-to-day activities. This chapter focuses on analysing…

Abstract

Purpose: With this study, the authors aim to highlight the application of machine learning in smart appliances used in our day-to-day activities. This chapter analyses intelligent devices used in our daily lives, examines various machine learning models that can be applied to make an appliance ‘intelligent’ and discusses the pros and cons of such implementations.

Methodology: Most smart appliances need machine learning models to interpret the meaning behind the sensor’s data in order to execute accurate predictions and come to appropriate conclusions.

Findings: The future holds endless possibilities for devices to be connected in different ways, and these devices will be in our homes, offices, industries and even vehicles that can connect to each other. The massive number of connected devices could congest the network; hence it is necessary to incorporate intelligence on end devices using machine learning algorithms. Connected devices that allow automatic appliance control driven by the user’s preferences would use the network to communicate with devices in close proximity or use other channels to liaise with external utility systems. Data processing is facilitated through edge devices, on which machine learning algorithms can be applied.

Significance: This chapter overviews smart appliances that use machine learning at the edge. It highlights the effects of using these appliances and how they raise overall living standards as smarter cities are introduced through the integration of such devices.

Details

Smart Analytics, Artificial Intelligence and Sustainable Performance Management in a Global Digitalised Economy
Type: Book
ISBN: 978-1-80382-555-7

Article
Publication date: 3 July 2020

Mohammad Khalid Pandit, Roohie Naaz Mir and Mohammad Ahsan Chishti

The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data generated by it in an ultralow latency environment. The computational…

Abstract

Purpose

The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data it generates in an ultralow-latency environment. The computational latency incurred by a cloud-only solution can be significantly reduced by the fog computing layer, which offers a computing infrastructure to minimize latency in service delivery and execution. For this purpose, a task scheduling policy based on reinforcement learning (RL) is developed that can achieve optimal resource utilization and minimum task execution time while significantly reducing communication costs during distributed execution.

Design/methodology/approach

To realize this, the authors propose a two-level neural network (NN)-based task scheduling system, where the first-level NN (a feed-forward or convolutional neural network [FFNN/CNN]) determines whether the data stream should be analyzed (executed) in the resource-constrained environment (edge/fog) or forwarded directly to the cloud. The second-level NN (an RL module) schedules all tasks sent by the level-1 NN to the fog layer among the available fog devices. This real-time task assignment policy is used to minimize the total computational latency (makespan) as well as communication costs.
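
The sketch below illustrates the two-level idea in miniature: a stand-in gate routes tasks to fog or cloud, and a tabular Q-learning step picks a fog device. The state, action and reward definitions are simplified assumptions; the paper's actual FFNN/CNN and RL module are not reproduced here:

```python
# Hypothetical two-level scheduler: level 1 gates tasks, level 2 assigns
# fog devices with one-step Q-learning. All definitions are illustrative.
import random

N_DEVICES = 4
Q = {}  # (state, device) -> estimated value

def level1_route(task_size_mb: float, threshold: float = 50.0) -> str:
    """Stand-in for the FFNN/CNN gate: oversized tasks go to the cloud."""
    return "cloud" if task_size_mb > threshold else "fog"

def level2_assign(loads, epsilon=0.1, alpha=0.5) -> int:
    """Pick a fog device; the reward favors lightly loaded devices."""
    state = loads.index(min(loads))
    if random.random() < epsilon:  # explore
        device = random.randrange(N_DEVICES)
    else:  # exploit the current value estimates
        device = max(range(N_DEVICES), key=lambda d: Q.get((state, d), 0.0))
    reward = -loads[device]
    old = Q.get((state, device), 0.0)
    Q[(state, device)] = old + alpha * (reward - old)  # one-step update
    return device

loads = [3.0, 1.0, 2.5, 0.5]
for size in [10, 80, 25]:
    if level1_route(size) == "fog":
        d = level2_assign(loads)
        loads[d] += size / 10
        print(f"task {size} MB -> fog device {d}")
    else:
        print(f"task {size} MB -> cloud")
```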

Findings

Experimental results indicated that the RL technique works better than the computationally infeasible greedy approach for task scheduling and the combination of RL and task clustering algorithm reduces the communication costs significantly.

Originality/value

The proposed algorithm fundamentally solves the problem of task scheduling in real-time fog-based IoT with best resource utilization, minimum makespan and minimum communication cost between the tasks.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 15 July 2022

Joy Iong-Zong Chen, Ping-Feng Huang and Chung Sheng Pi

The smart edge computing (EC) robot (SECR) provides the tools to manage Internet of Things (IoT) services in the edge landscape by means of a real-world test-bed…

Abstract

Purpose

The smart edge computing (EC) robot (SECR) provides the tools to manage Internet of Things (IoT) services in the edge landscape by means of a real-world test-bed designed in the ECR. Based on the results of two experiments held under lightly constrained conditions, such as a maximum data size of 2 GB, the proposed techniques demonstrate the effectiveness, scalability and performance efficiency of the proposed IoT model.

Design/methodology/approach

The proposed SECR primarily aims to take over from traditional static robots in a centralized or distributed cloud environment. One motivation for the proposed edge computing algorithms is the challenge of reducing the time consumption that occurs in an artificial intelligence (AI) robot system. Thus, the developed SECR is trained with tiny machine learning (TinyML) techniques to provide a decentralized and dynamic software environment.
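
A typical TinyML workflow of the kind described here trains a small vision model and converts it to a compact quantized format for on-robot inference. The sketch below assumes TensorFlow/TFLite and uses MNIST as a stand-in data set; the paper's actual image data and architecture are not specified in this abstract:

```python
# Hypothetical TinyML workflow: train a tiny classifier, then export a
# quantized TFLite model. Data set and architecture are stand-ins.
import tensorflow as tf

(x_tr, y_tr), _ = tf.keras.datasets.mnist.load_data()
x_tr = x_tr[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_tr, y_tr, epochs=1, batch_size=64)

# Dynamic-range quantization shrinks the model for edge deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("robot_vision.tflite", "wb") as f:
    f.write(converter.convert())
```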

Findings

Specifically, the time wasted by the SECR is reduced when it is embedded with edge computing devices, as demonstrated by data transmission over different paths. TinyML is applied to train on image data sets, generating a recognition framework that runs on the SECR, which is also proved in a second complete experiment.

Originality/value

The work presented in this paper is the first research effort focusing on resource allocation and dynamic path selection for edge computing. The developed platform uses a decoupled resource management model that manages the allocation of micro-node resources independently of the service provisioning performed at the cloud and manager nodes. In addition, the edge computing management algorithm is established with different paths for passing large data to the cloud and receiving it back. The SECR framework considered in this work is also able to support multi-dimensional scaling (MDS).

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 May 2024

Mikias Gugssa, Long Li, Lina Pu, Ali Gurbuz, Yu Luo and Jun Wang

Computer vision and deep learning (DL) methods have been investigated for personal protective equipment (PPE) monitoring and detection for construction workers’ safety. However…

Abstract

Purpose

Computer vision and deep learning (DL) methods have been investigated for personal protective equipment (PPE) monitoring and detection for construction workers’ safety. However, it is still challenging to implement automated safety monitoring methods in near real time or in a time-efficient manner in real construction practices. Therefore, this study developed a novel solution to enhance time efficiency, achieving near-real-time safety glove detection while preserving data privacy.

Design/methodology/approach

The developed method comprises two primary components: (1) transfer learning methods to detect safety gloves and (2) edge computing to improve time efficiency and data privacy. To compare the developed edge computing-based method with the currently widely used cloud computing-based methods, a comprehensive comparative analysis was conducted from both the implementation and theory perspectives, providing insights into the developed approach’s performance.
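
As an indication of how such transfer learning is commonly set up, the sketch below fine-tunes a COCO-pretrained torchvision detector by swapping its prediction head for the new glove classes. The class labels and detector choice are assumptions for illustration; the paper's exact models are not named in this abstract:

```python
# Hypothetical transfer-learning setup for glove detection; class names
# and the detector choice are illustrative assumptions.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pretrained on COCO and replace its box-predictor head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
num_classes = 3  # background, gloved-hand, bare-hand (assumed label set)
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)

# Fine-tuning on labeled site images is elided; inference then runs on the
# edge device for near-real-time monitoring.
model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 480, 640)])  # one dummy RGB frame
print(preds[0]["boxes"].shape, preds[0]["labels"][:5])
```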

Findings

Three DL models achieved mean average precision (mAP) scores ranging from 74.92% to 84.31% for safety glove detection. The other two methods, which combine object detection and classification, achieved an mAP of 89.91% for hand detection and 100% for glove classification. From both the implementation and theory perspectives, the edge computing-based method detected gloves faster than the cloud computing-based method, achieving a detection latency 36%–68% shorter in the implementation experiments. The findings highlight edge computing’s potential for near-real-time detection with improved data privacy.

Originality/value

This study implemented and evaluated DL-based safety monitoring methods on different computing infrastructures to investigate their time efficiency. This study contributes to existing knowledge by demonstrating how edge computing can be used with DL models (without sacrificing their performance) to improve PPE-glove monitoring in a time-efficient manner as well as maintain data privacy.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 14 June 2021

Shengpei Zhou, Zhenting Chang, Haina Song, Yuejiang Su, Xiaosong Liu and Jingfeng Yang

With the continuous technological development of automated driving and expansion of its application scope, the types of on-board equipment continue to be enriched and the…

Abstract

Purpose

With the continuous technological development of automated driving and the expansion of its application scope, the types of on-board equipment continue to be enriched, their computing capabilities continue to increase and the corresponding applications become more diverse. As these applications need to run on on-board equipment, the requirements for its computing capabilities become higher. Mobile edge computing is one of the effective methods for solving practical application problems in automated driving.

Design/methodology/approach

In accordance with practical requirements, this paper proposes an optimal resource management and allocation method for autonomous vehicle–infrastructure cooperation in a mobile edge computing environment and validates it in a practical application experiment.

Findings

The study proposes the design of the road-side unit module and its corresponding real-time operating system task coordination in edge computing, as well as methods for edge computing load integration and heterogeneous computing. Methods for real-time scheduling of highly concurrent computation tasks, adaptive computation task migration and collaborative edge server resource allocation are then proposed. Test results indicate that the method proposed in this study can greatly reduce task computing delay, while power consumption generally increases with task size and complexity.

Originality/value

The results showed that the proposed method can achieve lower power consumption and lower computational overhead while ensuring quality of service for users, indicating great application prospects for the method.

Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154

Book part
Publication date: 20 November 2023

Surjeet Dalal, Bijeta Seth and Magdalena Radulescu

Customers today expect businesses to cater to their individual needs by tailoring the products they purchase to their own preferences. The term “Industry 5.0” refers to a new wave…

Abstract

Customers today expect businesses to cater to their individual needs by tailoring the products they purchase to their own preferences. The term “Industry 5.0” refers to a new wave of manufacturing that aims to meet each customer's unique demands. Although Industry 4.0 allowed for mass customization, that is no longer enough: customers today demand individualized products at scale, and Industry 5.0 is driving the transition from mass customization to mass personalization to meet these demands. The widespread customization made possible by Industry 5.0 enables more specialized components for use in medicine. These individualized parts are incorporated into the patient's medical care to meet their specific needs and preferences. In the current medical revolution, the enabling technologies of Industry 5.0 can produce medical implants, artificial organs, bodily fluids and transplants with pinpoint accuracy. With the advent of AI-enabled sensors, we now live in a world where data can be swiftly analyzed, and machines may be programmed to make complex choices on the fly. In the medical field, these innovations allow for exact measurement and monitoring of human body variables according to the individual's needs, and they aid in monitoring the body's response to training for peak performance. They also allow for the digital dissemination of accurate healthcare data networks. To collect and exchange relevant patient data, every piece of equipment is online.

Details

Digitalization, Sustainable Development, and Industry 5.0
Type: Book
ISBN: 978-1-83753-191-2

Article
Publication date: 26 September 2022

Tulsi Pawan Fowdur and Lavesh Babooram

The purpose of this paper is the capture and analysis of network traffic using an array of machine learning (ML) and deep learning (DL) techniques to classify…

Abstract

Purpose

The purpose of this paper is the capture and analysis of network traffic using an array of machine learning (ML) and deep learning (DL) techniques, both to classify network traffic into different classes and to predict network traffic parameters.

Design/methodology/approach

The classifier models include k-nearest neighbour (KNN), multilayer perceptron (MLP) and support vector machine (SVM), while the regression models studied are multiple linear regression (MLR) as well as MLP. The analytics were performed on both a local server and a servlet hosted on the IBM cloud. Moreover, the local server could aggregate data from multiple devices on the network and perform collaborative ML to predict network parameters. With optimised hyperparameters, analytical models were incorporated in the cloud-hosted Java servlets that operate on a client–server basis, where the back-end communicates with Cloudant databases.
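
To make the model line-up concrete, the following sketch trains the three classifiers on synthetic features using scikit-learn; the generated data merely stands in for the captured traffic statistics, which are not detailed in this abstract:

```python
# Hypothetical benchmark of the three classifiers on synthetic stand-in
# features; the real traffic features are not specified in the abstract.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=12, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("MLP", MLPClassifier(max_iter=500, random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, f"accuracy: {clf.score(X_te, y_te):.3f}")
```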

Findings

Regarding classification, it was found that KNN performs significantly better than MLP and SVM with a comparative precision gain of approximately 7%, when classifying both Wi-Fi and long term evolution (LTE) traffic.

Originality/value

Collaborative regression models using traffic collected from two devices were tested and resulted in an increased average accuracy of 0.50% across all variables with a multivariate MLP model.

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 5
Type: Research Article
ISSN: 1742-7371

Open Access
Article
Publication date: 7 June 2023

Ping Li, Yi Liu and Sai Shao

This paper aims to provide a top-level design and basic platform for intelligent applications in China's high-speed railway.

Abstract

Purpose

This paper aims to provide a top-level design and basic platform for intelligent applications in China's high-speed railway.

Design/methodology/approach

Based on an analysis of the future development trends of world railways, combined with the actual development needs of China's high-speed railway, the definition and scientific connotation of the intelligent high-speed railway (IHSR) are given first, and the system architecture of the IHSR is then outlined, comprising one basic platform, three business sectors, 10 business fields and 18 innovative applications. Finally, a basic platform with cloud–edge integration for the IHSR is designed.

Findings

The rationality, feasibility and implementability of the system architecture of the IHSR have been verified and applied on the Beijing–Zhangjiakou high-speed railway, providing important support for the construction and operation of the world’s first IHSR.

Originality/value

This paper systematically gives the definition and connotation of the IHSR and puts forward the system architecture of the IHSR for the first time. It will play an important role in the design, construction and operation of the IHSR.

Details

Railway Sciences, vol. 2 no. 2
Type: Research Article
ISSN: 2755-0907

1 – 10 of over 16,000