Search results

1 – 7 of 7
Article
Publication date: 18 August 2021

Jameel Ahamed, Roohie Naaz Mir and Mohammad Ahsan Chishti

A huge amount of diverse data is generated in the Internet of Things (IoT) because of heterogeneous devices like sensors, actuators, gateways and many more. Due to assorted nature…

Abstract

Purpose

A huge amount of diverse data is generated in the Internet of Things (IoT) because of heterogeneous devices like sensors, actuators, gateways and many more. Due to the assorted nature of these devices, interoperability remains a major challenge for IoT system developers. The purpose of this study is to use mapping techniques for converting a relational database (RDB) to the Resource Description Framework (RDF) for the development of an ontology. Ontology helps in achieving semantic interoperability in application areas of IoT, which results in a shared/common understanding of the heterogeneous data generated by the diverse devices used in the health-care domain.

Design/methodology/approach

To overcome the issue of semantic interoperability in the health-care domain, the authors developed an ontology for patients with cardiovascular diseases. Patients located anywhere in the world can be diagnosed by heart experts located elsewhere using this approach. The mechanism maps heterogeneous data into the RDF format in an integrated and interoperable manner and is used to integrate the diverse data of heart patients needed for the diagnosis of cardiovascular diseases. The approach is also applicable in other fields where IoT is widely used.
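
As a rough sketch of the RDB-to-RDF mapping idea, the snippet below converts one relational patient row into RDF triples with rdflib. The table layout, column names and the ex: vocabulary are illustrative assumptions, not the ontology developed by the authors.

```python
# A minimal sketch of RDB-to-RDF mapping using rdflib.
# Column names and the ex: vocabulary are illustrative assumptions.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/cardio#")

def row_to_rdf(graph: Graph, row: dict) -> None:
    """Map one relational 'patients' row to RDF triples."""
    patient = EX[f"patient/{row['patient_id']}"]           # subject URI from primary key
    graph.add((patient, RDF.type, EX.HeartPatient))        # concept-based typing
    graph.add((patient, EX.hasHeartRate,
               Literal(row["heart_rate"], datatype=XSD.integer)))
    graph.add((patient, EX.hasSystolicBP,
               Literal(row["systolic_bp"], datatype=XSD.integer)))
    graph.add((patient, EX.recordedBy, EX[f"sensor/{row['sensor_id']}"]))

g = Graph()
g.bind("ex", EX)
row_to_rdf(g, {"patient_id": 42, "heart_rate": 88, "systolic_bp": 135, "sensor_id": "ecg-7"})
print(g.serialize(format="turtle"))  # interoperable RDF that SPARQL endpoints can query
```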

Findings

Experimental results showed that RDF works better than the relational database for semantic interoperability in the IoT. This concept-based approach is better than the key-based approach and reduces the computation time and storage requirements for the data.

Originality/value

The proposed approach helps in overcoming the limitations of relational databases with respect to standardization, expressivity and provenance, and it supports SPARQL querying. It therefore helps to overcome heterogeneity, thereby enabling semantic interoperability in the IoT.

Details

International Journal of Pervasive Computing and Communications, vol. 17 no. 4
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 26 March 2021

Azra Nazir, Roohie Naaz Mir and Shaima Qureshi

Natural languages have a fundamental quality of suppleness that makes it possible to present a single idea in plenty of different ways. This feature is often exploited in the…

Abstract

Purpose

Natural languages have a fundamental quality of suppleness that makes it possible to present a single idea in many different ways. This feature is often exploited in the academic world, leading to the theft of work referred to as plagiarism. Many approaches have been put forward to detect such cases based on various text features and grammatical structures of languages. However, there is considerable scope for improvement in detecting intelligent plagiarism.

Design/methodology/approach

To realize this, the paper introduces a hybrid model to detect intelligent plagiarism by breaking the process into three stages: (1) clustering, (2) vector formulation in each cluster based on semantic roles, normalization and similarity index calculation and (3) summary generation using an encoder-decoder. An effective weighting scheme is introduced to select the terms used to build vectors based on K-means, calculated over the synonym set of each term. Only if the value calculated in the previous stage lies above a predefined threshold is the next semantic argument analyzed. When the similarity score for two documents exceeds the threshold, a short summary of the plagiarized documents is created.
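
A much-simplified sketch of the first two stages is shown below: documents are clustered with K-means and only pairs within the same cluster are compared against a similarity threshold. The TF-IDF weighting, the threshold value and the toy corpus are illustrative stand-ins for the authors' semantic-role-based vectors and synonym-set weighting, not their implementation.

```python
# Simplified sketch: K-means clustering, then within-cluster similarity checks.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

THRESHOLD = 0.3  # assumed similarity threshold; the paper's value is not given here

docs = [
    "the model predicts heart disease at an early stage",
    "early prediction of cardiac illness by the model",
    "fog computing reduces latency for IoT tasks",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Stage 2: compare only documents that fall into the same cluster.
for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        if clusters[i] != clusters[j]:
            continue
        score = cosine_similarity(vectors[i], vectors[j])[0, 0]
        flag = "flagged for summary generation" if score > THRESHOLD else "ok"
        print(f"docs {i} and {j}: similarity {score:.2f} -> {flag}")
```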

Findings

Experimental results show that the method is able to detect the connotation and concealment used in idea plagiarism, in addition to detecting literal plagiarism.

Originality/value

The proposed model can help academics stay updated by providing summaries of relevant articles. It would help curb the practice of plagiarism, which is infesting the academic community at an unprecedented pace. The model will also accelerate the process of reviewing academic documents, aiding the speedy publishing of research articles.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 28 May 2019

Omerah Yousuf and Roohie Naaz Mir

Internet of Things (IoT) is a challenging and promising system concept and requires new types of architectures and protocols compared to traditional networks. Security is an…

Abstract

Purpose

Internet of Things (IoT) is a challenging and promising system concept that requires new types of architectures and protocols compared to traditional networks. Security is an extremely critical issue for IoT that needs to be addressed efficiently. Heterogeneity, an inherent characteristic of IoT, gives rise to many security issues that need to be addressed from the perspective of new architectures such as software-defined networking, cryptographic algorithms, federated cloud and edge computing.

Design/methodology/approach

The paper analyzes IoT security from three perspectives: the three-layer security architecture, the security issues at each layer and the corresponding countermeasures. It reviews the current state of the art and the protocols and technologies used at each layer of the security architecture, focusing on the various types of attacks that occur at each layer and the approaches used to counter them.
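
Purely as an illustration of this three-perspective structure, the sketch below lays out layers, example attacks and countermeasures as a data structure. The layer names follow the commonly used perception/network/application decomposition of IoT security, and the specific entries are assumptions for illustration; the paper's own taxonomy may differ in detail.

```python
# Hedged illustration of a per-layer attack/countermeasure breakdown.
# Layer names and entries are common examples, not the paper's exact taxonomy.
IOT_SECURITY_LAYERS = {
    "perception": {
        "example_attacks": ["node capture", "fake node injection"],
        "countermeasures": ["lightweight cryptography", "device authentication"],
    },
    "network": {
        "example_attacks": ["sinkhole", "man-in-the-middle", "DDoS"],
        "countermeasures": ["secure routing", "software-defined networking policies"],
    },
    "application": {
        "example_attacks": ["malicious code injection", "data leakage"],
        "countermeasures": ["access control", "federated cloud/edge isolation"],
    },
}

for layer, info in IOT_SECURITY_LAYERS.items():
    print(layer, "->", ", ".join(info["countermeasures"]))
```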

Findings

The data exchanged between the different devices or applications in the IoT environment are quite sensitive; thus, the security aspect plays a key role and needs to be addressed efficiently. This indicates the urgent need to develop general security policies and standards for IoT products. An efficient security architecture needs to be imposed, but not at the cost of efficiency and scalability. The paper provides empirical insights into how the different security threats at each layer can be mitigated.

Originality/value

The paper fulfills the need for an extensive and elaborate survey in the field of IoT security, along with suggesting countermeasures to mitigate the threats occurring at each level of the IoT protocol stack.

Details

Information & Computer Security, vol. 27 no. 2
Type: Research Article
ISSN: 2056-4961

Article
Publication date: 3 July 2020

Mohammad Khalid Pandit, Roohie Naaz Mir and Mohammad Ahsan Chishti

The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data generated by it in an ultralow latency environment. The computational…

Abstract

Purpose

The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data it generates in an ultralow-latency environment. The computational latency incurred by a cloud-only solution can be significantly brought down by the fog computing layer, which offers a computing infrastructure that minimizes latency in service delivery and execution. For this purpose, a task scheduling policy based on reinforcement learning (RL) is developed that can achieve optimal resource utilization, minimize task execution time and significantly reduce communication costs during distributed execution.

Design/methodology/approach

To realize this, the authors proposed a two-level neural network (NN)-based task scheduling system, where the first-level NN (a feed-forward neural network/convolutional neural network [FFNN/CNN]) determines whether a data stream can be analyzed (executed) in the resource-constrained environment (edge/fog) or should be forwarded directly to the cloud. The second-level NN (an RL module) schedules all tasks sent to the fog layer by the first-level NN among the available fog devices. This real-time task assignment policy is used to minimize the total computational latency (makespan) as well as communication costs.
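
The two-level idea can be illustrated with a very small sketch: a toy feed-forward gate that routes a task either to the cloud or to the fog layer, followed by a tabular, single-state Q-learning rule that picks a fog device. The feature set, network size, reward (negative latency) and device model are all illustrative assumptions, not the authors' FFNN/CNN and RL implementation.

```python
# Toy two-level scheduler: feed-forward gate (level 1) + tabular Q-learning (level 2).
import numpy as np

rng = np.random.default_rng(0)

# Level 1: a tiny feed-forward gate on simple task features [size, deadline].
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=8)

def offload_to_cloud(task_features: np.ndarray) -> bool:
    hidden = np.tanh(task_features @ W1)      # hidden layer, shape (8,)
    return float(hidden @ W2) > 0.0           # True -> cloud, False -> fog layer

# Level 2: single-state Q-learning over fog devices for tasks kept at the fog layer.
N_DEVICES, ALPHA, GAMMA, EPS = 3, 0.1, 0.9, 0.2
q_values = np.zeros(N_DEVICES)                # one Q-value per fog device

def schedule_on_fog(task_size: float) -> int:
    # epsilon-greedy action selection over fog devices
    device = int(rng.integers(N_DEVICES)) if rng.random() < EPS else int(np.argmax(q_values))
    latency = task_size / (device + 1) + rng.random()   # toy latency model (assumption)
    reward = -latency                                   # lower latency -> higher reward
    q_values[device] += ALPHA * (reward + GAMMA * q_values.max() - q_values[device])
    return device

for _ in range(200):
    features = rng.random(2)                  # [task size, deadline], both in [0, 1)
    if not offload_to_cloud(features):
        schedule_on_fog(task_size=float(features[0]))

print("learned device preferences:", np.round(q_values, 2))
```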

Findings

Experimental results indicated that the RL technique works better than the computationally infeasible greedy approach for task scheduling, and that combining RL with a task clustering algorithm reduces the communication costs significantly.

Originality/value

The proposed algorithm fundamentally solves the problem of task scheduling in real-time fog-based IoT with the best resource utilization, minimum makespan and minimum communication cost between tasks.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 3 July 2020

Azra Nazir, Roohie Naaz Mir and Shaima Qureshi

The trend of “Deep Learning for Internet of Things (IoT)” has gained fresh momentum with enormous upcoming applications employing these models as their processing engine and Cloud…

Abstract

Purpose

The trend of “Deep Learning for Internet of Things (IoT)” has gained fresh momentum, with enormous upcoming applications employing these models as their processing engine and the Cloud as their resource giant. But this picture leads to underutilization of the ever-increasing device pool of IoT, which had already passed the 15 billion mark in 2015. Thus, it is high time to explore a different approach to tackle this issue, keeping in view the characteristics and needs of the two fields. Processing at the Edge can boost applications with real-time deadlines while complementing security.

Design/methodology/approach

This review paper contributes towards three cardinal directions of research in the field of deep learning (DL) for IoT. The first section covers the categories of IoT devices and how Fog can aid in overcoming the underutilization of millions of devices, forming the realm of things for IoT. The second direction handles the issue of the immense computational requirements of DL models by uncovering specific compression techniques. An appropriate combination of these techniques, including regularization, quantization and pruning, can aid in building an effective compression pipeline for deploying DL models in IoT use cases. The third direction incorporates both these views and introduces a novel approach to parallelization for setting up a distributed-systems view of DL for IoT.
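
As a rough illustration of the compression steps named above, the sketch below applies magnitude pruning and uniform 8-bit quantization to a single weight matrix and shows the hashing trick behind the HashedNets-style weight sharing mentioned in the Findings. The pruning ratio, bit-width, bucket count and layer size are illustrative assumptions, not values taken from the review.

```python
# Sketch of a pruning + quantization + weight-sharing pipeline on one weight matrix.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(64, 32))

# 1) Magnitude pruning: zero out the smallest 80% of weights (assumed ratio).
threshold = np.quantile(np.abs(weights), 0.80)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

# 2) Uniform 8-bit quantization of the surviving weights.
scale = np.abs(pruned).max() / 127.0
quantized = np.round(pruned / scale).astype(np.int8)      # store int8 values + one float scale
dequantized = quantized.astype(np.float32) * scale

# 3) HashedNets-style sharing: many virtual weights map into a small bucket array.
N_BUCKETS = 256
buckets = rng.normal(scale=0.1, size=N_BUCKETS)
def hashed_weight(i: int, j: int) -> float:
    return buckets[hash((i, j)) % N_BUCKETS]               # shared-parameter lookup

print("nonzero after pruning:", np.count_nonzero(pruned), "of", weights.size)
print("max dequantization error:", np.abs(dequantized - pruned).max())
```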

Findings

DL models are growing deeper with every passing year. Well-coordinated distributed execution of such models using Fog displays a promising future for the IoT application realm. It is realized that a vertically partitioned compressed deep model can handle the trade-off between size, accuracy, communication overhead, bandwidth utilization and latency, but at the expense of a considerable additional memory footprint. To reduce the memory budget, we propose to exploit HashedNets as potentially favorable candidates for distributed frameworks. However, the critical point between accuracy and size for such models needs further investigation.

Originality/value

To the best of our knowledge, no study has explored the inherent parallelism in deep neural network architectures for their efficient distribution over the Edge-Fog continuum. Besides covering techniques and frameworks that have tried to bring inference to the Edge, the review uncovers significant issues and possible future directions for endorsing deep models as processing engines for real-time IoT. The study is directed at both researchers and industrialists who wish to take various applications to the Edge for a better user experience.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 10 March 2021

Afshan Amin Khan, Roohie Naaz Mir and Najeeb-Ud Din

This work focused on a basic building block of an allocation unit that carries out the critical job of deciding between the conflicting requests, i.e. an arbiter unit. The purpose…

Abstract

Purpose

This work focuses on a basic building block of an allocation unit that carries out the critical job of deciding between conflicting requests, i.e. an arbiter unit. The purpose of this work is to implement an improved hybrid arbiter while harnessing the basic advantages of a matrix arbiter.

Design/methodology/approach

The basic approach of the design methodology involves the extraction of traffic information from the buffer signals of each port. As traffic arrives in the buffer of the respective port, information from these buffers acts as a source of differentiation between ports receiving low traffic rates and ports receiving high traffic rates. A logic circuit is devised that enables the arbiter to dynamically assign priorities to different ports based on the information from the buffers. For implementation and verification of the proposed design, a two-stage approach was used. Stage I compares the proposed arbiter with other arbiters in the literature using the Vivado integrated design environment. Stage II demonstrates the implementation of the proposed design in the Cadence design environment for application-specific integrated circuit (ASIC)-level implementation. By using such a strategy, this study places a special focus on the feasibility of the design for very large-scale integration (VLSI) implementation.
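
The arbitration logic described above can be sketched as a small behavioral model: a classic matrix-arbiter priority matrix, extended so that ports whose buffers exceed an occupancy threshold compete first. This is only a software illustration under an assumed threshold and update rule; the actual design is an RTL/ASIC implementation whose priority logic may differ.

```python
# Behavioral sketch of a traffic-aware matrix arbiter (not the authors' RTL design).
class TrafficAwareMatrixArbiter:
    def __init__(self, n_ports: int, hot_threshold: int = 4):
        self.n = n_ports
        self.hot_threshold = hot_threshold          # buffer depth that marks a "hot" port
        # prio[i][j] == 1 means port i currently beats port j.
        self.prio = [[1 if i < j else 0 for j in range(n_ports)] for i in range(n_ports)]

    def grant(self, requests: list[bool], buffer_occupancy: list[int]) -> int | None:
        """Return the granted port index, or None if there are no requests."""
        hot = [occ >= self.hot_threshold for occ in buffer_occupancy]
        candidates = [i for i in range(self.n) if requests[i]]
        if not candidates:
            return None
        # Traffic awareness: hot ports compete first if any hot port is requesting.
        if any(hot[i] for i in candidates):
            candidates = [i for i in candidates if hot[i]]
        # Matrix rule: grant the candidate that beats every other requesting candidate.
        for i in candidates:
            if all(self.prio[i][j] for j in candidates if j != i):
                self._demote(i)
                return i
        return candidates[0]                        # unreachable: priorities form a total order

    def _demote(self, i: int) -> None:
        """Make the granted port the lowest priority (standard matrix-arbiter update)."""
        for j in range(self.n):
            if j != i:
                self.prio[i][j] = 0
                self.prio[j][i] = 1

arb = TrafficAwareMatrixArbiter(n_ports=4)
print(arb.grant([True, True, False, True], buffer_occupancy=[1, 6, 0, 2]))  # hot port 1 wins
```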

Findings

According to the simulation results, the proposed hybrid arbiter maintains the advantages of a basic matrix arbiter and also possesses the additional feature of fault-tolerant traffic awareness. These features are achieved with a 19% increase in throughput, a 1.5% decrease in delay and a 19% increase in area compared to a conventional matrix arbiter.

Originality/value

This paper proposes a traffic-aware mechanism that increases the throughput of an arbiter unit with some area trade-off. The key feature of this hybrid arbiter is that it can assign priorities to the requesting ports based upon the real-time traffic requirements of each port, allowing the arbiter to make arbitration decisions dynamically. Because buffer information is valuable in winning the priority, the fault-tolerant policy ensures that no priority is falsely assigned to a requesting port. This avoids wasted arbitration cycles and also increases throughput.

Article
Publication date: 10 February 2022

Jameel Ahamed, Roohie Naaz Mir and Mohammad Ahsan Chishti

The world is shifting towards the fourth industrial revolution (Industry 4.0), symbolising the move to digital, fully automated habitats and cyber-physical systems. Industry 4.0…

Abstract

Purpose

The world is shifting towards the fourth industrial revolution (Industry 4.0), symbolising the move to digital, fully automated habitats and cyber-physical systems. Industry 4.0 consists of innovative ideas and techniques in almost all sectors, including smart health care, which recommends technologies and mechanisms for the early prediction of life-threatening diseases. Cardiovascular disease (CVD), which includes stroke, is one of the world’s leading causes of sickness and death. As per the American Heart Association, CVDs are a leading cause of death globally, and it is believed that COVID-19 has also affected cardiovascular health, increasing the number of patients as a result. Early detection of such diseases is one of the solutions for a lower mortality rate. In this work, early prediction models for CVDs are developed with the help of machine learning (ML), a form of artificial intelligence that allows computers to learn and improve on their own without being explicitly programmed.

Design/methodology/approach

The proposed CVD prediction models are implemented with the help of ML techniques, namely, decision tree, random forest, k-nearest neighbours, support vector machine, logistic regression, AdaBoost and gradient boosting. To mitigate the effects of over-fitting and under-fitting, hyperparameter optimisation techniques are used to develop efficient disease prediction models. Furthermore, an ensemble technique using soft voting is used to gain more insight into the data set and build more accurate prediction models.
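
A brief scikit-learn sketch of a soft-voting ensemble with hyperparameter optimisation in the spirit described above is given below. The chosen base learners, parameter grid and the synthetic data are illustrative assumptions; the authors tuned a larger set of models on a real cardiovascular data set.

```python
# Sketch: soft-voting ensemble + cross-validated hyperparameter search.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a cardiovascular data set (13 features, binary label).
X, y = make_classification(n_samples=500, n_features=13, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
    ],
    voting="soft",                      # average predicted probabilities
)

# Hyperparameter optimisation over the ensemble members via cross-validation.
param_grid = {"rf__n_estimators": [100, 200], "knn__n_neighbors": [3, 5, 7]}
search = GridSearchCV(ensemble, param_grid, cv=5, scoring="roc_auc")
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_test, y_test))
```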

Findings

The models were developed to help health-care providers with the early diagnosis and prediction of heart disease, reducing the risk of patients developing severe disease. The heart disease risk evaluation model is built as a Jupyter Notebook web application, and its performance is calculated using unbiased indicators such as true positive rate, true negative rate, accuracy, precision, misclassification rate, area under the ROC curve and a cross-validation approach. The results revealed that the ensemble heart disease model outperforms the other proposed and implemented models.

Originality/value

The proposed and developed CVD prediction models aim at predicting CVDs at an early stage, thereby enabling prevention and precautionary measures at a very early stage of the disease, following the predictive-maintenance approach recommended in Industry 4.0. Prediction models are developed using the algorithms’ default values, hyperparameter optimisation and ensemble techniques.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 3
Type: Research Article
ISSN: 0143-991X
