Search results

1 – 10 of 195
Article
Publication date: 19 May 2022

Priyanka Kumari Bhansali, Dilendra Hiran, Hemant Kothari and Kamal Gulati

Cloud computing is a recently emerged model that affords clients limitless facilities, lowers the cost of client storage and computation and improves ease of use…

Abstract

Purpose

Cloud computing is a recently emerged model that affords clients limitless facilities, lowers the cost of client storage and computation and improves ease of use, leading to a surge in the number of enterprises and individuals storing data in the cloud. Cloud services are used by various organizations (education, medical and commercial) to store their data. In the health-care industry, for example, patient medical data is outsourced to a cloud server. Instead of relying on medical service providers, clients can access their medical data over the cloud.

Design/methodology/approach

This paper proposes a cloud-based health-care system for secure data storage and access control called hash-based ciphertext policy attribute-based encryption with signature (hCP-ABES). It provides fine-grained access control, security, authentication and user confidentiality for medical data. It enhances ciphertext-policy attribute-based encryption (CP-ABE) with hashing, encryption and signature. The proposed architecture includes protection mechanisms to guarantee that health-care and medical information can be securely exchanged between health systems via the cloud. Figure 2 depicts the proposed work's architectural design.
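As a rough illustration of the hash-then-sign-then-encrypt flow described above, the sketch below hashes a medical record, signs the digest with the data owner's key and passes the record to a CP-ABE encryption step. It is a minimal sketch, not the authors' implementation: the cp_abe_encrypt function and the example access policy are hypothetical placeholders, and a real system would call an actual CP-ABE library.

```python
# Minimal sketch of the hash-then-sign-then-encrypt flow (illustrative only).
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def cp_abe_encrypt(plaintext: bytes, policy: str) -> bytes:
    """Hypothetical stand-in for CP-ABE encryption under an access policy."""
    raise NotImplementedError("replace with a real CP-ABE library call")

def protect_record(record: bytes, policy: str, signing_key: Ed25519PrivateKey):
    digest = hashlib.sha256(record).digest()      # integrity hash of the record
    signature = signing_key.sign(digest)          # data-owner signature over the hash
    ciphertext = cp_abe_encrypt(record, policy)   # access governed by attributes
    return ciphertext, digest, signature

# Hypothetical policy: only cardiologists at HospitalA may decrypt.
owner_key = Ed25519PrivateKey.generate()
# ct, h, sig = protect_record(b"patient vitals ...",
#                             "role:cardiologist AND org:HospitalA", owner_key)
```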

Findings

For health-care-related applications, secure access to shared documents hosted on a cloud server is becoming increasingly important. However, designing an effective and secure data access method faces numerous constraints, including cloud server performance, a large number of data users and varied security requirements. This work adds hashing and signature to the classic CP-ABE technique. It protects the confidentiality of health-care data while also allowing fine-grained access control. According to an analysis of security needs, this work fulfills the privacy and integrity of health information using federated learning.

Originality/value

The Internet of Things (IoT) technology and smart diagnostic implants have enhanced health-care systems by allowing for remote access and screening of patients’ health issues at any time and from any location. Medical IoT devices monitor patients’ health status and combine this information into medical records, which are then transferred to the cloud and viewed by health providers for decision-making. However, when it comes to information transfer, the security and secrecy of electronic health records become a major concern. This work offers effective data storage and access control for a smart health-care system to protect confidentiality. CP-ABE ensures data confidentiality and also allows control of data access at a finer level. Furthermore, it allows owners to set up a dynamic patient health data sharing policy under the cloud layer. The proposed hCP-ABES provides fine-grained data access, security, authentication and user privacy for medical data, enhancing CP-ABE with hashing, encryption and signature. The proposed method has been evaluated, and the results signify that the proposed hCP-ABES is feasible compared with other access control schemes using federated learning.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 2
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 27 February 2024

Shefali Arora, Ruchi Mittal, Avinash K. Shrivastava and Shivani Bali

Deep learning (DL) is on the rise because it can make predictions and judgments from unseen data. Blockchain technologies are being combined with DL frameworks in…

Abstract

Purpose

Deep learning (DL) is on the rise because it can make predictions and judgments from unseen data. Blockchain technologies are being combined with DL frameworks in various industries to provide a safe and effective infrastructure. The review comprises literature on the most recent techniques used in these application sectors. We examine the current research trends across several fields and evaluate the literature in terms of its advantages and disadvantages.

Design/methodology/approach

The integration of blockchain and DL has been explored in several application domains for the past five years (2018–2023). Our research is guided by five research questions, and based on these questions, we concentrate on key application domains such as the usage of Internet of Things (IoT) in several applications, healthcare and cryptocurrency price prediction. We have analyzed the main challenges and possibilities concerning blockchain technologies. We have discussed the methodologies used in the pertinent publications in these areas and contrasted the research trends during the previous five years. Additionally, we provide a comparison of the widely used blockchain frameworks that are used to create blockchain-based DL frameworks.

Findings

By responding to five research objectives, the study highlights and assesses the effectiveness of already published works using blockchain and DL. Our findings indicate that IoT applications, such as their use in smart cities and cars, healthcare and cryptocurrency, are the key areas of research. The primary focus of current research is the enhancement of existing systems, with data analysis, storage and sharing via decentralized systems being the main motivation for this integration. Amongst the various frameworks employed, Ethereum and Hyperledger are popular among researchers in the domain of IoT and healthcare, whereas Bitcoin is popular for research on cryptocurrency.

Originality/value

There is a lack of literature that summarizes the state-of-the-art methods incorporating blockchain and DL in popular domains such as healthcare, IoT and cryptocurrency price prediction. We analyze the existing research done in the past five years (2018–2023) to review the issues and emerging trends.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 8
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 28 February 2023

Tulsi Pawan Fowdur, M.A.N. Shaikh Abdoolla and Lokeshwar Doobur

The purpose of this paper is to perform a comparative analysis of the delay associated in running two real-time machine learning-based applications, namely, a video quality…

Abstract

Purpose

The purpose of this paper is to perform a comparative analysis of the delay associated in running two real-time machine learning-based applications, namely, a video quality assessment (VQA) and a phishing detection application by using the edge, fog and cloud computing paradigms.

Design/methodology/approach

The VQA algorithm was developed using Android Studio and run on a mobile phone for the edge paradigm. For the fog paradigm, it was hosted on a Java server and for the cloud paradigm on the IBM and Firebase clouds. The phishing detection algorithm was embedded into a browser extension for the edge paradigm. For the fog paradigm, it was hosted on a Node.js server and for the cloud paradigm on Firebase.
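A simple way to reproduce this kind of comparison is to time the same request against each deployment. The sketch below is illustrative only and is not the authors' code: the endpoint URLs and payload are placeholder assumptions.

```python
# Measure average end-to-end response time of one request against the
# edge, fog and cloud deployments. URLs are hypothetical placeholders.
import time
import requests

ENDPOINTS = {
    "edge":  "http://192.168.1.10:8080/vqa",   # local device / LAN
    "fog":   "http://10.0.0.5:3000/vqa",       # nearby server
    "cloud": "https://example-cloud.com/vqa",  # remote cloud service
}

def measure(url: str, payload: dict, runs: int = 10) -> float:
    """Average round-trip response time in milliseconds."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        requests.post(url, json=payload, timeout=30)
        total += time.perf_counter() - start
    return 1000 * total / runs

# for name, url in ENDPOINTS.items():
#     print(name, measure(url, {"frame_stats": []}), "ms")
```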

Findings

For the VQA algorithm, the edge paradigm had the highest response time while the cloud paradigm had the lowest, as the algorithm was computationally intensive. For the phishing detection algorithm, the edge paradigm had the lowest response time, and the cloud paradigm had the highest, as the algorithm had a low computational complexity. Since the determining factor for the response time was the latency, the edge paradigm provided the smallest delay as all processing was local.

Research limitations/implications

The main limitation of this work is that the experiments were performed on a small scale due to time and budget constraints.

Originality/value

A detailed analysis with real applications has been provided to show how the complexity of an application can determine the best computing paradigm on which it can be deployed.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 1
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 28 June 2024

Zhiwei Qi, Tong Lu, Kun Yue and Liang Duan

This paper aims to propose an incremental graph indexing method based on probabilistic inferences in Bayesian network (BN) for approximate nearest neighbor search (ANNS) that adds…

Abstract

Purpose

This paper aims to propose an incremental graph indexing method based on probabilistic inferences in Bayesian network (BN) for approximate nearest neighbor search (ANNS) that adds unindexed queries into the graph index incrementally.

Design/methodology/approach

This paper first uses an attention-mechanism-based graph convolutional network to embed a social network into a low-dimensional vector space, which improves the efficiency of graph index construction. To add unindexed queries into the graph index incrementally, this study proposes to learn a rule-based BN from social interactions. Thus, the dependency relations between unindexed queries and their neighbors are represented, and probabilistic inferences in the BN are then performed.
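For intuition, the sketch below shows the generic step of adding an unindexed query vector to a proximity-graph index by linking it to its k nearest indexed neighbours. It is a simplification under stated assumptions: the paper's attention-based embedding and BN-driven rule inference for selecting neighbours are not reproduced here.

```python
# Incremental insertion of an unindexed query into a proximity-graph index
# (generic illustration, not the BN-based method of the paper).
import numpy as np

def add_to_index(index_vectors: np.ndarray, graph: dict, query: np.ndarray, k: int = 5) -> int:
    """index_vectors: (n, d) embeddings already indexed; graph: node id -> neighbour ids."""
    dists = np.linalg.norm(index_vectors - query, axis=1)  # distance to indexed points
    neighbours = np.argsort(dists)[:k]                     # k nearest indexed neighbours
    new_id = len(graph)                                     # id for the new query node
    graph[new_id] = [int(n) for n in neighbours]
    for n in neighbours:                                    # make the links bidirectional
        graph[int(n)].append(new_id)
    return new_id
```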

Findings

Experimental results demonstrate that the proposed method improves the search precision by at least 5% and search efficiency by 10% compared to the state-of-the-art methods.

Originality/value

This paper proposes a novel method to construct the incremental graph index based on probabilistic inferences in BN, such that both indexed and unindexed queries in ANNS could be addressed efficiently.

Details

International Journal of Web Information Systems, vol. 20 no. 4
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 25 April 2024

Tulsi Pawan Fowdur and Ashven Sanghan

The purpose of this paper is to develop a blockchain-based data capture and transmission system that will collect real-time power consumption data from a household electrical…

Abstract

Purpose

The purpose of this paper is to develop a blockchain-based data capture and transmission system that will collect real-time power consumption data from a household electrical appliance and transfer it securely to a local server for energy analytics such as forecasting.

Design/methodology/approach

The data capture system is composed of two current transformer (CT) sensors connected to two different electrical appliances. The CT sensors send the power readings to two Arduino microcontrollers, which in turn connect to a Raspberry Pi for aggregating the data. Blockchain is then enabled on the Raspberry Pi through a Java API so that the data are transmitted securely to a server. The server provides real-time visualization of the data as well as prediction using the multi-layer perceptron (MLP) and long short-term memory (LSTM) algorithms.

Findings

The results of the blockchain analysis demonstrate that when the data readings are transmitted in smaller blocks, the security is much greater compared with blocks of larger size. To assess the accuracy of the prediction algorithms, data were collected over a 20 min interval to train the model, and the algorithms were evaluated using the sliding window approach. The mean absolute percentage error (MAPE) was used to assess the accuracy of the algorithms, and a MAPE of 1.62% and 1.99% was obtained for the LSTM and MLP algorithms, respectively.
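For reference, the sketch below shows how a sliding-window evaluation and the MAPE metric reported above can be computed. The window size and the naive baseline predictor are placeholder assumptions, not the authors' MLP or LSTM models.

```python
# Sliding-window evaluation with MAPE (illustrative sketch).
import numpy as np

def mape(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

def sliding_window_eval(series: np.ndarray, predict, window: int = 12) -> float:
    """Predict each point from the preceding `window` readings and score with MAPE."""
    actual, preds = [], []
    for t in range(window, len(series)):
        preds.append(predict(series[t - window:t]))
        actual.append(series[t])
    return mape(np.array(actual), np.array(preds))

# Naive persistence baseline: predict the last observed power reading.
# err = sliding_window_eval(power_readings, predict=lambda w: w[-1])
```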

Originality/value

A detailed performance analysis of the blockchain-based transmission model using time complexity, throughput and latency as well as energy forecasting has been performed.

Details

Sensor Review, vol. 44 no. 3
Type: Research Article
ISSN: 0260-2288


Article
Publication date: 13 September 2022

Haixiao Dai, Phong Lam Nguyen and Cat Kutay

Digital learning systems are crucial for education, and the data collected can be used to analyse students’ learning performance to improve support. The purpose of this study is to design and…

Abstract

Purpose

Digital learning systems are crucial for education, and the data collected can be used to analyse students’ learning performance to improve support. The purpose of this study is to design and build an asynchronous hardware and software system that can store data on a local device until it is able to share it. It was developed for university staff and students who have limited internet access in areas such as the remote Northern Territory. The system can asynchronously link users’ devices and the central server at the university over unstable internet.

Design/methodology/approach

A Learning Box has been built based on a minicomputer and a web learning management system (LMS). This study presents different options to create such a system and discusses various approaches for data syncing. The structure of the final setup is a Moodle (Modular Object-Oriented Dynamic Learning Environment) LMS on a Raspberry Pi which provides a Wi-Fi hotspot. The authors worked with lecturers from X University who work in remote Northern Territory regions to test this and provide feedback. This study also considered suitable data collection and analysis techniques that can be applied to the available data to support learning analysis by the staff.

Findings

The resultant system has been tested in various scenarios to ensure it is robust when students’ submissions are collected. Furthermore, issues around student familiarity and ability to use online systems have been considered due to early feedback.

Research limitations/implications

Monitoring asynchronous collaborative learning systems through analytics can assist students’ learning in their own time. Learning Hubs can be easily set up and maintained using micro-computers that are now easily available. A phone interface is sufficient for learning when video and audio submissions are supported in the LMS.

Practical implications

This study shows digital learning can be implemented in an offline environment by using a Raspberry Pi as an LMS server. Offline collaborative learning in remote communities can be achieved by applying asynchronous data syncing techniques. Asynchronous data syncing can be reliably achieved by using change logs and an incremental syncing technique.
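A minimal sketch of change-log based incremental syncing is given below. It is an assumed design for illustration, not the authors' implementation: the log file name and the send() callback are hypothetical.

```python
# Change-log based incremental syncing (assumed design, illustrative only).
import json
import time

LOG_FILE = "change_log.jsonl"  # hypothetical local append-only log

def record_change(entry: dict) -> None:
    """Append a change made on the local device, stamped with the current time."""
    entry["ts"] = time.time()
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(entry) + "\n")

def pending_changes(last_synced_ts: float) -> list:
    """Return only the entries newer than the last successful sync."""
    with open(LOG_FILE) as f:
        entries = [json.loads(line) for line in f]
    return [e for e in entries if e["ts"] > last_synced_ts]

def sync(send, last_synced_ts: float) -> float:
    """send() uploads a batch to the central server when the link is up."""
    changes = pending_changes(last_synced_ts)
    if changes:
        send(changes)
        last_synced_ts = max(c["ts"] for c in changes)
    return last_synced_ts
```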

Social implications

Focus on audio and video submission allows engagement in higher education by students with lower literacy but higher practice skills. Curriculum that clearly supports the level of learning required for a job needs to be developed, and the assumption that literacy is part of the skilled job in the workplace needs to be removed.

Originality/value

To the best of the authors’ knowledge, this is the first remote asynchronous collaborative LMS environment that has been implemented. This provides the hardware and software for opportunities to share learning remotely. Material to support low literacy students is also included.

Details

Interactive Technology and Smart Education, vol. 21 no. 1
Type: Research Article
ISSN: 1741-5659


Book part
Publication date: 1 July 2024

Paul Dylan-Ennis

A Web3 lifeworld consists of an imaginary and a shared commons. A Web3 imaginary is shown to include most, if not all, of the following: (i) the stated goal or purpose of the…

Abstract

A Web3 lifeworld consists of an imaginary and a shared commons. A Web3 imaginary is shown to include most, if not all, of the following: (i) the stated goal or purpose of the community, (ii) the behavioral norms, (iii) the lore or history, and (iv) what is opposed. A typical Web3 commons is shown to involve three elements: hash (technical), bash (social) and cash (finance). When changes come in Web3, the response is enacted using an available lever from the hash, bash, cash model of decentralized organization, but the response must not be in friction with the community’s imaginary, or it will most likely grind to a halt. Effective response to change becomes part of the Web3 lifeworld’s toolkit.

Details

Defining Web3: A Guide to the New Cultural Economy
Type: Book
ISBN: 978-1-83549-600-8


Article
Publication date: 22 March 2022

Shiva Sumanth Reddy and C. Nandini

The present research work is carried out to determine haemoprotozoan diseases in cattle and breast cancer in humans at an early stage. The combination of LeNet and…

Abstract

Purpose

The present research work is carried out to determine haemoprotozoan diseases in cattle and breast cancer in humans at an early stage. The combination of the LeNet and bidirectional long short-term memory (Bi-LSTM) models is used for the classification of haemoprotozoan samples into three classes: theileriosis, babesiosis and anaplasmosis. Also, BreaKHis dataset image samples are classified into two major classes, malignant and benign. Hyperparameter optimization is used for selecting the prominent features. The main objective of this approach is to overcome the manual identification and classification of samples into different haemoprotozoan diseases in cattle. The traditional laboratory approach to identification is time-consuming and requires human expertise. The proposed methodology will help to identify and classify haemoprotozoan diseases at an early stage without much human involvement.

Design/methodology/approach

A LeNet-based Bi-LSTM model is used for the classification of pathology images into babesiosis, anaplasmosis and theileriosis, and of breast images into malignant or benign. An optimization-based superpixel clustering algorithm is used for segmentation once the normalization of histopathology images has been conducted. The edge information in the normalized images is considered for identifying the irregularly shaped regions of images, which are structurally meaningful. It is also compared with another segmentation approach, the circular Hough transform (CHT). The CHT is used to separate the nuclei from non-nuclei. Canny edge detection and a Gaussian filter are used for extracting the edges before applying the CHT.
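The preprocessing chain for the CHT comparison (Gaussian smoothing, Canny edges, circular Hough transform) can be sketched with OpenCV as below; the threshold and radius parameters are placeholder assumptions, not the authors' settings.

```python
# Illustrative CHT preprocessing: smooth, extract edges, locate circular nuclei.
import cv2

def detect_nuclei(path: str):
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(image, (5, 5), 0)   # Gaussian filter to suppress noise
    edges = cv2.Canny(blurred, 50, 150)            # Canny edge map (for inspection)
    circles = cv2.HoughCircles(                    # circular Hough transform
        blurred, cv2.HOUGH_GRADIENT, 1.2, 20,
        param1=150, param2=30, minRadius=5, maxRadius=40)
    return edges, circles  # circles is None if no nuclei-like regions are found
```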

Findings

Existing methods such as the artificial neural network (ANN), convolutional neural network (CNN), recurrent neural network (RNN), LSTM and Bi-LSTM models have been compared with the proposed hyperparameter optimization approach with LeNet and Bi-LSTM. The proposed hyperparameter optimization-Bi-LSTM model achieved an accuracy of 98.99%, compared with 95.29% for an ensemble of deep learning models and 95.94% for the modified ReliefF algorithm.

Originality/value

In contrast to earlier research using the modified ReliefF algorithm, the suggested LeNet with Bi-LSTM model improves accuracy, precision and F-score significantly. A real-time dataset is used for the haemoprotozoan disease samples. Also, for anaplasmosis and babesiosis, a second set of coloured datasets was used, obtained by adding a chemical (acetone) and stain.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 4
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 15 August 2023

Walaa AlKhader, Raja Jayaraman, Khaled Salah, Andrei Sleptchenko, Jiju Antony and Mohammed Omar

Quality 4.0 (Q4.0) leverages new emerging technologies to achieve operational excellence and enhance performance. Implementing Q4.0 in digital manufacturing can bring about…

Abstract

Purpose

Quality 4.0 (Q4.0) leverages new emerging technologies to achieve operational excellence and enhance performance. Implementing Q4.0 in digital manufacturing can bring about reliable, flexible and decentralized manufacturing. Emerging technologies such as non-fungible tokens (NFTs), blockchain and the InterPlanetary File System (IPFS) can all be utilized to realize Q4.0 in digital manufacturing. NFTs, for instance, can provide traceability and property ownership management and protection. Blockchain provides secure and verifiable transactions in a manner that is trusted, immutable and tamper-proof. This research paper aims to explore the concept of Q4.0 within digital manufacturing systems and provide a novel solution based on blockchain and NFTs for implementing Q4.0 in digital manufacturing.

Design/methodology/approach

This study reviews the relevant literature and presents a detailed system architecture, along with a sequence diagram that demonstrates the interactions between the various participants. To implement a prototype of the authors' system, the authors next develop multiple Ethereum smart contracts and test the algorithms designed. Then, the efficacy of the proposed system is validated through an evaluation of its cost-effectiveness and security parameters. Finally, this research provides other potential applications and scenarios across diverse industries.
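As a hedged illustration of the kind of client-side interaction such a system involves, the sketch below registers the hash of an IPFS-hosted quality record through an Ethereum contract using web3.py. The contract address, ABI and the mintQualityRecord function are hypothetical and do not reproduce the authors' smart contracts.

```python
# Hypothetical client registering a quality record as an NFT via web3.py.
from web3 import Web3

QUALITY_NFT_ABI = []  # hypothetical ABI, omitted here
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))            # local Ethereum node
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=QUALITY_NFT_ABI)

def register_record(ipfs_cid: str, owner: str):
    record_hash = Web3.keccak(text=ipfs_cid)                     # fingerprint of the IPFS content
    tx = contract.functions.mintQualityRecord(owner, record_hash).transact({"from": owner})
    return w3.eth.wait_for_transaction_receipt(tx)
```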

Findings

The proposed solution's smart contracts governing the transactions among the participants were implemented successfully. Furthermore, the authors' analysis indicates that the authors' solution is cost-effective and resilient against commonly known security attacks.

Research limitations/implications

This study represents a pioneering endeavor in the exploration of the potential applications of NFTs and blockchain in the attainment of a comprehensive quality framework (Q4.0) in digital manufacturing. Presently, the body of research on quality control or assurance in digital manufacturing is limited in scope, primarily focusing on the products and production processes themselves. However, this study examines the other vital elements, including management, leadership and intra- and inter-organizational relationships, which are essential for manufacturers to achieve superior performance and optimal manufacturing outcomes.

Practical implications

To facilitate the achievement of Q4.0 and empower manufacturers to attain outstanding quality and gain significant competitive advantages, the authors propose the integration of Blockchain and NFTs into the digital manufacturing framework, with all related processes aligned with an organization's strategic and leadership objectives.

Originality/value

This study represents a pioneering endeavor in the exploration of the potential applications of NFTs and blockchain in the attainment of a comprehensive quality framework (Quality 4.0) in digital manufacturing. Presently, the body of research on quality control or assurance in digital manufacturing is limited in scope, primarily focusing on the products and production processes themselves. However, this study examines the other vital elements, including management, leadership and intra- and inter-organizational relationships, which are essential for manufacturers to achieve superior performance and optimal manufacturing outcomes.

Details

Journal of Manufacturing Technology Management, vol. 34 no. 7
Type: Research Article
ISSN: 1741-038X


Article
Publication date: 17 June 2021

Ambica Ghai, Pradeep Kumar and Samrat Gupta

Web users rely heavily on online content to make decisions without assessing the veracity of the content. The online content comprising text, image, video or audio may be tampered…


Abstract

Purpose

Web users rely heavily on online content to make decisions without assessing the veracity of the content. The online content, comprising text, image, video or audio, may be tampered with to influence public opinion. Since the consumers of online information (misinformation) tend to trust the content when the image(s) supplement the text, image manipulation software is increasingly being used to forge images. To address the crucial problem of image manipulation, this study focusses on developing a deep-learning-based image forgery detection framework.

Design/methodology/approach

The proposed deep-learning-based framework aims to detect images forged using copy-move and splicing techniques. The image transformation technique aids the identification of relevant features for the network to train effectively. After that, the pre-trained customized convolutional neural network is trained on the public benchmark datasets, and the performance is evaluated on the test dataset using various parameters.
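A transfer-learning setup of the general kind described above can be sketched as follows. The ResNet-18 backbone, frozen feature layers and hyperparameters are assumptions for illustration and are not the authors' customized network.

```python
# Illustrative transfer learning for a two-class (authentic vs forged) classifier.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pre-trained backbone
for p in model.parameters():
    p.requires_grad = False                     # freeze pre-trained features
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: authentic vs forged

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a mini-batch of transformed images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```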

Findings

The comparative analysis of image transformation techniques and the experiments conducted on benchmark datasets from a variety of socio-cultural domains establish the effectiveness and viability of the proposed framework. These findings affirm the potential applicability of the proposed framework in real-time image forgery detection.

Research limitations/implications

This study bears implications for several important aspects of research on image forgery detection. First, this research adds to the recent discussion on feature extraction and learning for image forgery detection. While prior research on image forgery detection hand-crafted the features, the proposed solution contributes to the stream of literature that automatically learns the features and classifies the images. Second, this research contributes to the ongoing effort to curtail the spread of misinformation using images. The extant literature on the spread of misinformation has prominently focussed on textual data shared over social media platforms. The study addresses the call for greater emphasis on the development of robust image transformation techniques.

Practical implications

This study carries important practical implications for various domains such as forensic sciences, media and journalism where image data is increasingly being used to make inferences. The integration of image forgery detection tools can be helpful in determining the credibility of the article or post before it is shared over the Internet. The content shared over the Internet by the users has become an important component of news reporting. The framework proposed in this paper can be further extended and trained on more annotated real-world data so as to function as a tool for fact-checkers.

Social implications

In the current scenario, wherein most image forgery detection studies attempt to assess whether an image is real or forged in an offline mode, it is crucial to identify any trending or potentially forged image as early as possible. By learning from historical data, the proposed framework can aid in the early detection of newly emerging forged images. In summary, the proposed framework has the potential to mitigate the spread and psychological impact of forged images on social media.

Originality/value

This study focusses on copy-move and splicing techniques while integrating transfer learning concepts to classify forged images with high accuracy. The synergistic use of hitherto little explored image transformation techniques and customized convolutional neural network helps design a robust image forgery detection framework. Experiments and findings establish that the proposed framework accurately classifies forged images, thus mitigating the negative socio-cultural spread of misinformation.

Details

Information Technology & People, vol. 37 no. 2
Type: Research Article
ISSN: 0959-3845

