International Journal of Pervasive Computing and Communications
Table of contents, Vol. 20, Iss. 2 (including Just Accepted / EarlyCite)
https://www.emerald.com/insight/publication/issn/1742-7371/vol/20/iss/2

Trusted routing protocol for federated UAV network
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-01-2022-0011/full/html
Elham Kariri, Kusum Yadav
International Journal of Pervasive Computing and Communications, Vol. 20, No. 2, pp.193-205

In the final step, the trust model is applied to the ad hoc on-demand multipath distance vector routing protocol (AOMDV): path trust is introduced as the basis for route selection in the route discovery phase, a trusted path is constructed, and a path warning mechanism is implemented in the route maintenance phase to detect malicious nodes.

A trust-based on-demand multipath distance vector routing protocol is developed to address the problem of flying ad hoc networks being subjected to internal attacks and experiencing frequent connection interruptions. After the node trust assessment model is constructed and the trust evaluation criteria are presented, the data packet forwarding rate, trusted interaction degree and detection packet receipt rate are discussed. Next, the direct trust degree of each node is computed by an adaptive fuzzy trust aggregation network. The indirect trust degree reported by neighbouring nodes is then used to calculate the overall trust degree of a node in the network. As a second step, a trust fluctuation penalty mechanism is designed to defend the trust model against switch (on-off) attacks.
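The trust computation steps above can be sketched in Python. This is only an illustration of the pipeline (direct trust from three criteria, blending with neighbours' indirect reports, and a fluctuation penalty): the weights, blend factor and swing threshold are assumptions, and a weighted sum stands in for the paper's adaptive fuzzy aggregation network.

```python
# Illustrative sketch, not the paper's calibrated model.

def direct_trust(forward_rate, interaction_degree, probe_receipt_rate,
                 weights=(0.4, 0.3, 0.3)):
    """Aggregate the three evaluation criteria into a direct trust degree.
    A weighted sum approximates the adaptive fuzzy aggregation step."""
    w1, w2, w3 = weights
    return w1 * forward_rate + w2 * interaction_degree + w3 * probe_receipt_rate

def node_trust(direct, indirect_reports, alpha=0.7):
    """Blend direct trust with the mean of neighbours' indirect reports."""
    if indirect_reports:
        indirect = sum(indirect_reports) / len(indirect_reports)
    else:
        indirect = direct
    return alpha * direct + (1 - alpha) * indirect

def penalised_trust(history, penalty=0.5, swing=0.3):
    """Penalise abrupt trust swings to resist switch (on-off) attacks:
    a node that oscillates between good and bad behaviour loses trust faster."""
    trust = history[-1]
    if len(history) > 1 and abs(history[-1] - history[-2]) > swing:
        trust *= penalty
    return trust
```

A node that forwards reliably keeps a stable score, while one alternating between cooperation and dropping triggers the penalty and cannot rebuild trust quickly.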

Compared with the lightweight trust-enhanced routing protocol (TEAOMDV), it significantly improves the data packet delivery rate and throughput of the network.

Additionally, it reduces routing overhead and the average end-to-end delay.

DOI: 10.1108/IJPCC-01-2022-0011 | Published: 2022-04-26 | © 2022 Emerald Publishing Limited
A new federated genetic algorithm-based optimization technique for multi-criteria vehicle route planning using ArcGIS network analyst
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-02-2022-0082/full/html
Da’ad Ahmad Albalawneh, M.A. Mohamed
International Journal of Pervasive Computing and Communications, Vol. 20, No. 2, pp.206-227

Using a real-time road network combined with historical traffic data for Al-Salt city, this paper proposes a new federated genetic algorithm (GA)-based optimization technique to solve the dynamic vehicle routing problem. Using a GA solver with 300 chromosomes (routes), the estimated routing time was the shortest and most efficient over 30 generations.

In transportation systems, the objective of route planning techniques has shifted from road directors to road users. As a result, new transportation systems use advanced technologies to support drivers, providing the road information they need and the services they require to reduce traffic congestion and ease routing problems. In recent decades, numerous studies have been conducted on how to find an efficient and suitable route for vehicles, known as the vehicle routing problem (VRP). To identify the best route, VRP uses real-time information acquired from geographical information systems (GIS) tools.

This study aims to develop a route planning tool using ArcGIS network analyst to enhance both cost and service quality measures, taking into account several factors to determine the best route based on the users’ preferences.

A route planning tool is developed using ArcGIS Network Analyst to enhance both cost and service quality measures, taking several factors into account to determine the best route based on the users' preferences. An adaptive genetic algorithm (GA) is used to determine the optimal-time route, taking into account factors that affect vehicle arrival times and cause delays. In addition, the ArcGIS Network Analyst tool is used to determine the best route based on the user's preferences using a real-time map.
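A minimal GA of the kind the abstract describes can be sketched as follows. Chromosomes are visit orders and fitness is total travel time; the toy time matrix, the simple one-point crossover and swap mutation, and the small population are illustrative stand-ins for the paper's adaptive GA and its ArcGIS-derived travel times.

```python
import random

def route_time(route, times):
    """Total travel time of an open route over a pairwise time matrix."""
    return sum(times[route[i]][route[i + 1]] for i in range(len(route) - 1))

def crossover(a, b):
    """Order crossover: keep the head of parent a, fill from parent b."""
    head = a[: len(a) // 2]
    return head + [g for g in b if g not in head]

def mutate(route, rate=0.2):
    """Occasionally swap two stops to keep diversity in the population."""
    r = route[:]
    if random.random() < rate:
        i, j = random.sample(range(len(r)), 2)
        r[i], r[j] = r[j], r[i]
    return r

def ga_best_route(times, pop_size=30, generations=30, seed=1):
    """Evolve visit orders; the fitter half survives each generation."""
    random.seed(seed)
    n = len(times)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_time(r, times))
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=lambda r: route_time(r, times))
```

In the paper's setting the matrix entries would come from the real-time road network rather than constants, and the fitness could fold in the other user-preference criteria as weighted terms.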

DOI: 10.1108/IJPCC-02-2022-0082 | Published: 2022-05-17 | © 2022 Emerald Publishing Limited
Cloud-based secure data storage and access control for internet of medical things using federated learning
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-02-2022-0041/full/html
Priyanka Kumari Bhansali, Dilendra Hiran, Hemant Kothari, Kamal Gulati
International Journal of Pervasive Computing and Communications, Vol. 20, No. 2, pp.228-239

Cloud computing is a recently emerged model that affords clients almost limitless facilities, lowers the cost of client storage and computation and improves ease of use, leading to a surge in the number of enterprises and individuals storing data in the cloud. Cloud services are used by various organizations (education, medical and commercial) to store their data. In the health-care industry, for example, patient medical data is outsourced to a cloud server. Instead of relying on medical service providers, clients can access their medical data over the cloud.

This paper proposes a cloud-based health-care system for secure data storage and access control, called hash-based ciphertext-policy attribute-based encryption with signature (hCP-ABES). It provides access control with finer granularity, along with security, authentication and user confidentiality of medical data. It enhances ciphertext-policy attribute-based encryption (CP-ABE) with hashing, encryption and signature. The proposed architecture includes protection mechanisms to guarantee that health-care and medical information can be securely exchanged between health systems via the cloud. Figure 2 depicts the proposed work's architectural design.
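The shape of the hash-then-sign-then-encrypt pipeline can be sketched end to end. Real CP-ABE needs a pairing-based cryptography library; below, a hash-derived keystream and a plain set-membership policy check stand in for the attribute-based encryption step, so only the hashing and signature flow is faithful. All names and parameters here are illustrative, not the paper's API.

```python
import hashlib
import hmac
import json

def _keystream(key: bytes, n: int) -> bytes:
    """Hash-counter keystream; a stand-in for the CP-ABE ciphertext layer."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def protect_record(record: dict, enc_key: bytes, sign_key: bytes, policy: str):
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(payload).digest()                        # hashing step
    signature = hmac.new(sign_key, digest, hashlib.sha256).digest()  # signature step
    ks = _keystream(enc_key, len(payload))
    cipher = bytes(a ^ b for a, b in zip(payload, ks))               # encryption step
    return {"policy": policy, "cipher": cipher, "sig": signature}

def open_record(blob: dict, enc_key: bytes, sign_key: bytes, attributes: set):
    # Stand-in policy check: real CP-ABE enforces this cryptographically,
    # so a user lacking the attributes simply cannot decrypt.
    if blob["policy"] not in attributes:
        raise PermissionError("attributes do not satisfy policy")
    ks = _keystream(enc_key, len(blob["cipher"]))
    payload = bytes(a ^ b for a, b in zip(blob["cipher"], ks))
    digest = hashlib.sha256(payload).digest()
    expected = hmac.new(sign_key, digest, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, blob["sig"]):
        raise ValueError("signature check failed")
    return json.loads(payload)
```

The signature over the hash gives the integrity and authentication properties the abstract claims, while the policy gate models the fine-grained access control that CP-ABE would provide cryptographically.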

For health-care-related applications, secure access to shared documents hosted on a cloud server is becoming increasingly important. However, there are numerous constraints on designing an effective and safe data access method, including cloud server performance, a large number of data users and various security requirements. This work adds hashing and a signature to the classic CP-ABE technique. It protects the confidentiality of health-care data while also allowing fine-grained access control. According to an analysis of security needs, this work fulfills the privacy and integrity of health information using federated learning.

The Internet of Things (IoT) and smart diagnostic implants have enhanced health-care systems by allowing remote access and screening of patients' health issues at any time and from any location. Medical IoT devices monitor patients' health status and combine this information into medical records, which are then transferred to the cloud and viewed by health providers for decision-making. However, when it comes to information transfer, the security and secrecy of electronic health records become a major concern. This work offers effective data storage and access control for a smart health-care system to protect confidentiality. CP-ABE ensures data confidentiality and also allows data access to be controlled at a finer level. Furthermore, it allows owners to set up a dynamic policy for sharing patients' health data under the cloud layer. hCP-ABES provides fine-grained data access, security, authentication and user privacy of medical data, enhancing CP-ABE with hashing, encryption and signature. The proposed method has been evaluated, and the results signify that hCP-ABES is feasible compared with other access control schemes using federated learning.

DOI: 10.1108/IJPCC-02-2022-0041 | Published: 2022-05-19 | © 2022 Emerald Publishing Limited
A new predictive approach for the MAC layer misbehavior in IEEE 802.11 networks
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-08-2022-0303/full/html
Mohammed-Alamine El Houssaini, Abdellah Nabou, Abdelali Hadir, Souad El Houssaini, Jamal El Kafi
International Journal of Pervasive Computing and Communications, Vol. 20, No. 2, pp.240-261

Ad hoc mobile networks are commonplace in every aspect of our everyday life. They have become essential in many industries, with uses in logistics, science and the military. However, because they operate mostly in open spaces, they are exposed to a variety of dangers. The purpose of this study is to introduce a novel method for detecting MAC layer misbehavior.

The proposed approach is based on exponential smoothing of throughput predictions to address this MAC layer misbehavior. The real and expected throughput are processed with an exponential smoothing algorithm to identify the attack, and if these metrics exhibit a trending divergence, an alarm is raised.
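The detection idea can be sketched in a few lines: smooth the measured throughput, compare it with the expected value, and raise an alarm when the deficit persists. The smoothing convention below weights the newest sample by `alpha` (some texts, apparently including this paper, use the complementary convention), and the 20% drop threshold and three-sample run are illustrative assumptions.

```python
def smooth(series, alpha):
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    s = series[0]
    out = [s]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

def misbehavior_alarm(measured, expected, alpha=0.9, drop=0.2, run=3):
    """Alarm when smoothed throughput stays `drop` below expectation for
    `run` consecutive samples, i.e. a sustained, trending deficit rather
    than a one-off dip."""
    streak = 0
    for est, exp in zip(smooth(measured, alpha), expected):
        streak = streak + 1 if est < (1 - drop) * exp else 0
        if streak >= run:
            return True
    return False
```

A greedy node that manipulates its backoff starves its neighbours, so the victims' measured throughput falls well below the expected fair share for many consecutive samples, which is exactly the pattern this detector keys on.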

The effect of IEEE 802.11 MAC layer misbehavior on throughput was examined using the NS-2 network simulator, which was also used to validate the novel strategy. The authors found that a smoothing factor close to 0 provides a very accurate throughput forecast that takes into account the recent history of the measured values, while smoothing factor values close to 1 are used to identify MAC layer misbehavior.

To the best of the authors' knowledge, this scheme has not previously been proposed in the state of the art for the detection of greedy behavior in mobile ad hoc networks.

DOI: 10.1108/IJPCC-08-2022-0303 | Published: 2023-07-03 | © 2023 Emerald Publishing Limited
Novel communication system for buried water pipe monitoring using acoustic signal propagation along the pipe
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-05-2022-0179/full/html
Omotayo Farai, Nicole Metje, Carl Anthony, Ali Sadeghioon, David Chapman
International Journal of Pervasive Computing and Communications, Vol. 20, No. 2, pp.262-284

Wireless sensor networks (WSN), as a solution for buried water pipe monitoring, face a new set of challenges compared to traditional applications in above-ground infrastructure monitoring. One of the main challenges for underground WSN deployment is the limited range (less than 3 m) at which reliable wireless underground communication can be achieved using radio signal propagation through the soil. To overcome this challenge, the purpose of this paper is to investigate a new approach for wireless underground communication using acoustic signal propagation along a buried water pipe.

An acoustic communication system was developed based on the requirements of low cost (tens of pounds at most), low power supply capacity (on the order of 1 W·h) and miniature (centimetre-scale) size for a wireless communication node. The developed system was then tested along a buried steel pipe in poorly graded SAND and a buried medium-density polyethylene (MDPE) pipe in well-graded SAND.

With predicted acoustic attenuation of 1.3 dB/m and 2.1 dB/m along the buried steel and MDPE pipes, respectively, reliable acoustic communication is possible up to 17 m for the buried steel pipe and 11 m for the buried MDPE pipe.
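The reported ranges follow from a simple link-budget argument: reachable distance is the available budget (transmit level minus receiver sensitivity, in dB) divided by the per-metre attenuation. The 22 dB budget below is an assumption chosen only to show that it reproduces both reported figures at once; the paper's actual budget may differ.

```python
def max_range_m(link_budget_db, attenuation_db_per_m):
    """Distance at which the accumulated path loss exhausts the link budget."""
    return link_budget_db / attenuation_db_per_m

# Assumed 22 dB budget applied to the paper's attenuation figures:
steel = max_range_m(22.0, 1.3)  # about 16.9 m, close to the reported 17 m
mdpe = max_range_m(22.0, 2.1)   # about 10.5 m, close to the reported 11 m
```

The higher attenuation of the MDPE pipe wall directly explains its shorter communication range under the same budget.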

Although an important first step, more research is needed to validate the acoustic communication system along a wider water distribution pipe network.

This paper shows the possibility of achieving reliable wireless underground communication along a buried water pipe (especially a non-metallic one) using low-frequency acoustic propagation along the pipe wall.

DOI: 10.1108/IJPCC-05-2022-0179 | Published: 2023-10-06 | © 2023 Emerald Publishing Limited
Cooperative optimization techniques in distributed MAC protocols – a survey
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-07-2022-0256/full/html
Radha Subramanyam, Y. Adline Jancy, P. Nagabushanam
International Journal of Pervasive Computing and Communications, Vol. 20, No. 2, pp.285-307

A cross-layer approach in the media access control (MAC) layer will address interference and jamming problems. Hybrid distributed MAC can be used for simultaneous voice and data transmissions in wireless sensor network (WSN) and Internet of Things (IoT) applications. Choosing the correct objective function for the Nash equilibrium in game theory will address the fairness index and resource allocation to the nodes. Game theory optimization for distributed MAC may increase network performance. The purpose of this study is to survey the various operations that can be carried out using distributed and adaptive MAC protocols. Hill-climbing distributed MAC does not need a central coordination system, and location-based transmission with neighbor awareness reduces transmission power.

Distributed MAC in wireless networks is used to address challenges such as network lifetime, energy consumption and delay performance. In this paper, a survey is made of cooperative communications in MAC protocols, optimization techniques used to improve MAC performance in various applications and the mathematical approaches involved in game theory optimization for MAC protocols.

Spatial reuse of the channel improves performance by 3%–29%, and multichannel operation improves throughput by 8% using a distributed MAC protocol. Nash equilibrium is found to perform well, focusing on the energy utility contributed to the network by individual players. Fuzzy logic improves channel selection by 17% and secondary users' involvement by 8%. A cross-layer approach in the MAC layer will address interference and jamming problems. Hybrid distributed MAC can be used for simultaneous voice and data transmissions in WSN and IoT applications. Cross-layer and cooperative communication give energy savings of 27% and reduce hop distance by 4.7%. Choosing the correct objective function for the Nash equilibrium will address the fairness index and resource allocation to the nodes.
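A toy illustration of the Nash-equilibrium framing these works use: nodes repeatedly best-respond by moving to the least-crowded channel, and when no node can improve by moving, the channel loads form a Nash equilibrium. Treating interference as simple congestion is a deliberate simplification of the utility functions real MAC games use.

```python
def best_response_channels(n_nodes, n_channels, rounds=20):
    """Best-response dynamics for a channel-selection congestion game.
    Returns each node's channel once no profitable deviation remains."""
    choice = [0] * n_nodes                  # everyone starts on channel 0
    for _ in range(rounds):
        moved = False
        for i in range(n_nodes):
            # Load each channel carries from the *other* nodes.
            load = [0] * n_channels
            for j, c in enumerate(choice):
                if j != i:
                    load[c] += 1
            best = min(range(n_channels), key=lambda c: load[c])
            if load[best] < load[choice[i]]:
                choice[i] = best            # profitable deviation: take it
                moved = True
        if not moved:                       # no one wants to move: equilibrium
            break
    return choice
```

In the equilibrium the nodes spread evenly across channels, which is the fairness-plus-resource-allocation outcome the surveyed objective functions aim at; richer games replace the load count with energy or throughput utilities.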

Other optimization techniques can be applied for WSN to analyze the performance.

Game theory optimization for distributed MAC may increase network performance. Optimal cuckoo search improves throughput by 90% and reduces delay by 91%. Stochastic approaches detect 80% of attacks even with 90% malicious nodes.

Channel allocation in a centralized or static manner must be based on traffic demands, whether the traffic is dynamic or fluctuating. The usage of multimedia devices has also increased, which in turn has increased the demand for high throughput. Co-channel interference keeps changing, which can be handled by proper resource allocation. Network survival depends on efficient usage of valid paths in the network, avoiding transmission failures and using time slots effectively.

A literature survey is carried out to find the methods that give better performance.

DOI: 10.1108/IJPCC-07-2022-0256 | Published: 2023-10-11 | © 2023 Emerald Publishing Limited
Big data challenges and opportunities in Internet of Vehicles: a systematic review
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-09-2023-0250/full/html
Atefeh Hemmati, Mani Zarei, Amir Masoud Rahmani
International Journal of Pervasive Computing and Communications, Vol. 20, No. 2, pp.308-342

Big data in the Internet of Vehicles (IoV) has emerged as a transformative paradigm for intelligent transportation systems. With the growth of data-driven applications and the advances in data analysis techniques, data-adaptive innovation is set to become an outstanding feature of future IoV applications. Therefore, this paper aims to focus on big data in IoV and to provide an analysis of the current state of research.

This review paper uses a systematic literature review methodology. It conducts a thorough search of academic databases to identify relevant scientific articles. By reviewing and analyzing the primary articles found in the big data in the IoV domain, 45 research articles from 2019 to 2023 were selected for detailed analysis.

This paper identifies the main applications, use cases and primary contexts considered for big data in IoV. It then documents challenges, opportunities, future research directions and open issues.

This paper is based on academic articles published from 2019 to 2023. Therefore, scientific outputs published before 2019 are omitted.

This paper provides a thorough analysis of big data in IoV and considers distinct research questions corresponding to big data challenges and opportunities in IoV. It also provides valuable insights for researchers and practitioners by examining the existing literature and future directions for big data in the IoV ecosystem.

Big data challenges and opportunities in Internet of Vehicles: a systematic review. Atefeh Hemmati, Mani Zarei, Amir Masoud Rahmani. International Journal of Pervasive Computing and Communications, Vol. 20, No. 2. DOI: 10.1108/IJPCC-09-2023-0250. Published online 2024-02-29. © 2024 Emerald Publishing Limited.
Video compression based on zig-zag 3D DCT and run-length encoding for multimedia communication systems
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-01-2022-0012/full/html
Sravanthi Chutke, Nandhitha N.M., Praveen Kumar Lendale
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

With the advent of technology, a huge amount of data is transmitted and received over the internet. Large bandwidth and storage are required for the exchange and storage of this data. Hence, compression of the data to be transmitted over the channel is unavoidable. The main purpose of the proposed system is to use the bandwidth effectively. The videos are compressed at the transmitter's end and reconstructed at the receiver's end. Compression also reduces storage requirements.

The paper proposes a novel compression technique for three-dimensional (3D) videos using a zig-zag 3D discrete cosine transform. The method applies a 3D discrete cosine transform to the videos, followed by a zig-zag scanning process. Finally, to convert the data into a single bit stream for transmission, a run-length encoding technique is used. The videos are reconstructed using the inverse 3D discrete cosine transform, inverse zig-zag scanning (quantization) and inverse run-length coding. The proposed method is simple and reduces the complexity of conventional techniques.
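The abstract gives the pipeline but not an implementation. As a minimal sketch, assuming the common generalisation of zig-zag scanning that orders 3D coefficients by increasing frequency index i + j + k, the two lossless stages after the DCT can look like this (block size, function names and coefficient values are illustrative):

```python
def zigzag3d_order(nx, ny, nz):
    """Visit (i, j, k) coefficient indices by increasing i + j + k,
    so low-frequency DCT terms come first and the zero-heavy
    high-frequency tail forms long runs for the RLE stage."""
    idx = [(i, j, k) for i in range(nx) for j in range(ny) for k in range(nz)]
    return sorted(idx, key=lambda t: (t[0] + t[1] + t[2], t))

def run_length_encode(values):
    """Collapse consecutive repeats into (value, count) pairs."""
    pairs = []
    for v in values:
        if pairs and pairs[-1][0] == v:
            pairs[-1] = (v, pairs[-1][1] + 1)
        else:
            pairs.append((v, 1))
    return pairs

# Toy 2x2x2 block of quantised DCT coefficients: the DC term and one
# low-frequency term survive; the high-frequency tail is quantised to zero.
block = {(0, 0, 0): 12, (1, 0, 0): 3}
stream = [block.get(p, 0) for p in zigzag3d_order(2, 2, 2)]
print(run_length_encode(stream))  # → [(12, 1), (0, 2), (3, 1), (0, 4)]
```

The zig-zag ordering is what makes run-length encoding pay off: after coarse quantization the trailing high-frequency coefficients are mostly zero, so they collapse into a single (0, count) pair.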

Coding reduction, code word reduction, peak signal-to-noise ratio (PSNR), mean square error, compression percentage and compression ratio are calculated, and the dominance of the proposed method over conventional methods is demonstrated.

Zig-zag quantization and run-length encoding with the 3D discrete cosine transform achieve compression of up to 90% with a PSNR of 41.98 dB. The proposed method can be used in multimedia applications where bandwidth, storage and data expenses are the major issues.

Video compression based on zig-zag 3D DCT and run-length encoding for multimedia communication systems. Sravanthi Chutke, Nandhitha N.M., Praveen Kumar Lendale. International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print. DOI: 10.1108/IJPCC-01-2022-0012. Published online 2022-07-25. © 2022 Emerald Publishing Limited.
AI federated learning based improvised random Forest classifier with error reduction mechanism for skewed data sets
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-02-2022-0034/full/html
Anjali More, Dipti Rana
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

The referenced data set produces reliable information about network flows and common attacks that meets real-world criteria. Accordingly, this study aims to focus on the imbalanced intrusion detection benchmark knowledge discovery in databases (KDD) data set, which is widely used by researchers for experimentation and analysis. The proposed algorithm, improvised random forest classification with error-tuning factors (IRFCETF), is evaluated on the KDD data set over a complete set of network traffic features.

Many present-day applications deal with imbalanced data classification (ImDC), and real-time application areas such as artificial intelligence (AI) and the Industrial Internet of Things (IIoT) suffer degraded classification performance due to skewed data distribution (SkDD). Numerous application areas deal with SkDD, and many data applications in AI and IIoT face reduced classification rates as a result. Recent years have also seen an exponential expansion in the volume of computer network data and related application development. Intrusion detection is one of the demanding applications of ImDC. The proposed study applies the IRFCETF approach to the imbalanced intrusion benchmark KDD data set and other benchmark data sets, and demonstrates enriched classification performance on imbalanced data over the existing approach. This work also reviews imbalanced data applications in numerous areas, including AI and IIoT, tunes performance with respect to principal component analysis, and focusses on the out-of-bag error performance-tuning factor.
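The performance distortion that SkDD causes can be made concrete in a few lines. The data below is illustrative, not drawn from the KDD set: with a 95:5 class skew, a classifier that always predicts the majority class still reports high accuracy while detecting nothing.

```python
def class_metrics(y_true, y_pred):
    """Overall accuracy plus per-class recall, to expose the gap
    that a skewed class distribution hides behind a high accuracy figure."""
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    recall = {}
    for c in sorted(set(y_true)):
        rows = [(t, p) for t, p in zip(y_true, y_pred) if t == c]
        recall[c] = sum(t == p for t, p in rows) / len(rows)
    return acc, recall

# 95 normal flows, 5 attacks; predicting "normal" for everything
# scores 95% accuracy yet catches zero attacks.
y_true = ["normal"] * 95 + ["attack"] * 5
y_pred = ["normal"] * 100
acc, recall = class_metrics(y_true, y_pred)
print(acc, recall["attack"])  # → 0.95 0.0
```

This is why per-class metrics and error-tuning mechanisms, rather than headline accuracy alone, matter for intrusion detection on skewed data.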

Experimental results on the KDD data set show that the proposed algorithm gives enriched performance. For the referenced intrusion detection data set, IRFCETF achieves a classification accuracy of 99.57% and an error rate of 0.43%.

This research work can be extended with further improvements in classification techniques using multiple correspondence analysis (MCA); hierarchical MCA can be explored with classification models for a wide range of skewed data sets.

The metrics enhancement is measurable and helpful in dealing with intrusion detection system-related imbalanced applications in current domains such as security, AI and IIoT digitization. Analytical results show improved metrics for the proposed approach compared with traditional machine learning algorithms, justifying the measurable impact of the error-tuning parameter on classification accuracy.

The proposed algorithm is useful in numerous IIoT applications such as health care and machinery automation.

This research work addresses the classification metric enhancement approach IRFCETF. The proposed method yields a test-set categorization for each case with an error reduction mechanism.

AI federated learning based improvised random Forest classifier with error reduction mechanism for skewed data sets. Anjali More, Dipti Rana. International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print. DOI: 10.1108/IJPCC-02-2022-0034. Published online 2022-08-19. © 2022 Emerald Publishing Limited.
Secure data collection and transmission for IoMT architecture integrated with federated learning
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-02-2022-0042/full/html
Priyanka Kumari Bhansali, Dilendra Hiran, Kamal Gulati
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

The purpose of this paper is secure health data collection and transmission (SHDCT). In this system, a native network consists of portable smart devices that interact with multiple gateways. It entails IoMT devices and wearables connecting to exchange sensitive data with a sensor node, which performs the aggregation process and then communicates the data through a Fog server. If the aggregator sensor loses its connection to the Fog server, it cannot submit data directly; instead, the node shares encrypted information with a neighbouring sensor, which relays it to the Fog server integrated with federated learning, where it is encrypted and added to the existing data. The Fog server performs operations on the measured data, stores the values in local storage, and later updates them to the cloud server.

SHDCT uses an Internet of Things (IoT)-based monitoring network, making it possible for smart devices to connect and interact with each other. The main purpose of the monitoring network is the collection of biological data and additional information from patients' mobile devices. The monitoring network is composed of three different types of smart devices that are at the heart of the IoT.

This work addresses how to design an architecture for safe data aggregation in heterogeneous IoT-federated learning-enabled wireless sensor networks (WSNs), using basic encoding and data aggregation methods. The authors propose that the small gateway node (SGN) captures all of the sensed data from the SD and uses a simple, lightweight encoding scheme together with cryptographic techniques to convey the data to the gateway node (GWN). The GWN receives all of the medical data from the SGN and ensures that the data is accurate and up to date. If the data obtained is trustworthy, the medical data is aggregated and sent to the Fog server for further processing. The proposed SHDCT model is simulated and analyzed in the Java programming language for deployment and message initiation. When the SHDCT scheme is compared with the SPPDA and EHDA schemes, the results show that SHDCT performs significantly better: relative to EHDA and SPPDA it requires 4.72% and 13.59% lower communication cost, achieves 8.47% and 24.41% higher transmission ratios, and retains 5.85% and 18.86% greater residual energy, respectively.
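The abstract describes the encoding only as "simple and lightweight", so the sketch below is a generic stand-in rather than the authors' SHDCT scheme: the SGN tags each reading with an HMAC, and the GWN verifies the tag before a reading joins the aggregate forwarded to the Fog server. The pre-shared key, function names and mean aggregation are assumptions for illustration only.

```python
import hmac, hashlib

SHARED_KEY = b"sgn-gwn-demo-key"  # illustrative pre-shared key

def sgn_encode(reading: float) -> tuple[bytes, str]:
    """SGN side: serialise the reading and attach an integrity tag."""
    payload = repr(reading).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def gwn_verify_and_aggregate(messages):
    """GWN side: keep only readings whose tag checks out, then
    aggregate (mean) the trusted values for the Fog server."""
    trusted = []
    for payload, tag in messages:
        expect = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        if hmac.compare_digest(tag, expect):
            trusted.append(float(payload))
    return sum(trusted) / len(trusted) if trusted else None

msgs = [sgn_encode(36.5), sgn_encode(37.0)]
msgs.append((b"99.9", "forged-tag"))  # tampered message is dropped
print(gwn_verify_and_aggregate(msgs))  # → 36.75
```

Aggregating at the gateway is what keeps communication cost down: only one verified summary per round travels to the Fog server rather than every raw reading.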

In the health care sector, a series of interconnected medical devices collect data using IoT networks. Preventive, predictive, personalized and participatory care is becoming increasingly popular, and safe data collection and transfer to a centralized server is a challenging scenario. This study presents a mechanism for SHDCT. The mechanism consists of smart healthcare IoT devices working on federated learning that link up with one another to exchange health data, which is sensitive and needs to be exchanged securely and efficiently. In the mechanism, the sensing devices send data to an SGN. This SGN uses a lightweight encoding scheme and applies cryptographic techniques to communicate the data to the GWN. The GWN receives all the health data from the SGN and confirms that the data is validated. If the received data is reliable, the medical data is aggregated and transmitted to the Fog server for further processing. The performance parameters are compared with other systems in terms of communication cost, transmission ratio and energy use.

Secure data collection and transmission for IoMT architecture integrated with federated learning. Priyanka Kumari Bhansali, Dilendra Hiran, Kamal Gulati. International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print. DOI: 10.1108/IJPCC-02-2022-0042. Published online 2022-05-19. © 2022 Emerald Publishing Limited.
Hybrid cumulative approach for localization of nodes with adaptive threshold gradient feature on energy minimization using federated learning
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-02-2022-0045/full/html
Adumbabu I., K. Selvakumar
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

Localization of nodes is crucial for gaining access to nodes deployed in extreme areas where networks are otherwise unreachable. Node localization has become a significant study area in which multiple features of the distance model are implicated in predictive and heuristic models for each set of localization parameters, governing an energy-minimization design with the proposed adaptive threshold gradient feature (ATGF) model. A received signal strength indicator (RSSI) model with node-estimated features is applied to the localization problem and enhanced with a hybrid cumulative approach (HCA) algorithm for node optimization with distance prediction.

Using a theoretical or empirical signal propagation model, RSSI (with known transmitting power) is converted to distance, the received power (measured at the receiving node) is converted to distance, and distance is converted back to RSSI (with known receiving power). As a result, the approximate distance between the transmitting node and the receiver may be determined by measuring the intensity of the received signal. After acquiring the distance between an anchor node and the unknown node, the location of the unknown node may be determined using either the trilateral technique or the maximum likelihood estimation approach, depending on the circumstances, using federated learning.
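The RSSI-to-distance conversion described above is conventionally done with the log-distance path loss model. The sketch below uses that standard model; the path loss exponent and reference RSSI are illustrative values, not parameters from the paper:

```python
import math

def rssi_from_distance(d, rssi_d0=-40.0, n=2.0, d0=1.0):
    """Log-distance path loss: RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0),
    where n is the path loss exponent and d0 the reference distance."""
    return rssi_d0 - 10.0 * n * math.log10(d / d0)

def distance_from_rssi(rssi, rssi_d0=-40.0, n=2.0, d0=1.0):
    """Invert the model to estimate the anchor-to-node distance."""
    return d0 * 10.0 ** ((rssi_d0 - rssi) / (10.0 * n))

# Round trip: a node 5 m from the anchor, free-space exponent n = 2.
rssi = rssi_from_distance(5.0)
print(round(rssi, 1), round(distance_from_rssi(rssi), 3))  # → -54.0 5.0
```

With three or more anchor-to-node distance estimates of this kind, the unknown node's position can then be solved by trilateration or maximum likelihood estimation, as the abstract notes.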

Improving localization in wireless sensor networks has become one of the prime design goals for estimating different conditional changes, both external and internal. One such improvement is presented in this paper via the HCA, where each localization feature is modelled with machine learning algorithms addressing the energy reduction problem for each newly localized node (Section 5). All parametric features affecting energy levels and the localization problem for new and inactive nodes are handled with the hybrid cumulative approach (Section 4). The proposed algorithm (HCA with ATGF) yields a significant change in the energy levels of newly generated nodes and of nodes that are inactive for a stipulated time, as shown in the figures and tables of Section 6.

Localization of nodes is crucial for gaining access to nodes deployed in extreme areas where networks are otherwise unreachable. Node localization has become a significant study area in which multiple features of the distance model are implicated in predictive and heuristic models for each set of localization parameters, governing an energy-minimization design with the proposed ATGF model. An RSSI model with node-estimated features is applied to the localization problem and enhanced with the HCA algorithm for node optimization with distance prediction.

Hybrid cumulative approach for localization of nodes with adaptive threshold gradient feature on energy minimization using federated learning. Adumbabu I., K. Selvakumar. International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print. DOI: 10.1108/IJPCC-02-2022-0045. Published online 2022-06-17. © 2020 Emerald Publishing Limited.
An optimized and efficient multiuser data sharing using the selection scheme design secure approach and federated learning in cloud environment
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-02-2022-0047/full/html

Until now, much research has been done and applied to provide security and protect original data passed from one user to another, such as third-party auditing and several schemes for securing the data, including key generation with encryption algorithms such as Rivest–Shamir–Adleman. Some related prior works follow. The RDCR scheme by Yan et al. (2017) is based on minimum bandwidth and enables a third party to perform public integrity verification. Although it supports repair management for corrupt data and tries to recover the original data, in practice it fails to do so, and thus incurs more computation and communication cost than the proposed system. Chen et al. (2015) developed an idea for cloud storage data sharing using broadcast encryption. This technique aims to accomplish both broadcast data and dynamic sharing, allowing users to join and leave a group without affecting the EPK. The theoretical notion was sound and new, but the system's practicality and efficiency were not acceptable, and its security was also jeopardised because it proposed adding a member without altering any keys. Another line of research investigated an identity-based encryption strategy for data sharing, along with key management and metadata techniques to improve model security (Jiang and Guo, 2017); forward and reverse ciphertext security is supplied there, but it is more difficult to put into practice, and one limitation is that it applies only to very large amounts of cloud storage. A secure and efficient privacy-preserving provable data possession scheme for cloud storage supports data dynamics, privacy preservation, batch auditing and blocker verification for an untrusted, outsourced storage model (Pathare and Chouragade, 2017). A homomorphic signature mechanism was devised to avoid public key certificates, based on a new ID; this signature system was shown to be resistant to ID attacks in the random oracle model and to forged-message attacks (Nayak and Tripathy, 2018; Lin et al., 2017). When storing data in a public cloud, one issue is that the data owner must give an enormous number of keys to users for them to access the files. The key-aggregate searchable encryption (KASE) scheme addresses this: while sharing a huge number of documents, the data owner supplies only a single key to the user, and the user provides only a single trapdoor. Although the concept is innovative, the KASE technique does not apply to the increasingly common manufactured cloud. Cui et al. (2016) claim that as the amount of data grows, the DMS will be unable to handle it; as a result, various provable data possession (PDP) schemes have been developed, yet practically all lack security. Hence a certificate-based PDP built on bilinear pairing was introduced, which, being robust and efficient, is mostly applicable in the DMS. The main purpose of this research is to design and implement a secure cloud infrastructure for sharing group data, providing an efficient and secure protocol that allows many users to share data easily.

The major goal of this study is to design and implement a secure cloud infrastructure for sharing group data through an efficient and secure protocol for multiuser data in the cloud. Selection scheme design (SSD) comprises two algorithms: the first is designed for a limited number of users, and the second is redesigned for multiple users. Further, the authors design the SSD security protocol as a three-phase model: Phase 1 generates the parameters and distributes the private keys, Phase 2 generates a common key for all available users, and Phase 3 prevents dishonest users from taking part in data sharing.

Data sharing in cloud computing provides unlimited computational resources and storage to enterprises and individuals; however, it also raises several privacy and security concerns, such as fault tolerance, reliability, confidentiality and data integrity. The key consensus mechanism is a fundamental cryptographic primitive for secure communication; motivated by this, the authors developed the SSD mechanism, which embraces multiple users in the data-sharing model. Files shared in the cloud are encrypted for security and later decrypted for users to access.

For evaluation of the SSD method, the authors considered an ideal system environment, using Java as the programming language and Eclipse as the integrated development environment for the proposed model evaluation. The hardware configuration comprises 4 GB RAM and an i7 processor, and the PBC library is used for the pairing operations (PBC Library, 2022). In the evaluation, the number of users is varied for comparison with the existing RDIC methodology (Li et al., 2020). For the purposes of the SSD security protocol, a prime number is chosen as the number of users in this work.
Shubangini Patil, Rekha Patil
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

Until now, a great deal of research has been conducted on securing data shared between users, including third-party auditing and key-generation schemes based on encryption algorithms such as Rivest–Shamir–Adleman (RSA). Some of the related prior work is summarised here. The remote damage control resuscitation (RDCR) scheme by Yan et al. (2017) is based on minimum bandwidth and enables a third party to perform public integrity verification. Although it supports repair management for corrupted data and attempts to recover the original data, in practice it fails to do so and therefore incurs higher computation and communication costs than the proposed system. Chen et al. (2015) developed an idea for cloud-storage data sharing using broadcast encryption. This technique aims to accomplish both broadcast data and dynamic sharing, allowing users to join and leave a group without affecting the electronic press kit (EPK). The theoretical notion was sound and novel, but the system's practicality and efficiency were unacceptable, and its security was also jeopardised because it proposed adding a member without altering any keys. Jiang and Guo (2017) investigated an identity-based encryption strategy for data sharing, together with key-management and metadata techniques to improve model security. Forward and reverse ciphertext security is provided, but the scheme is more difficult to put into practice, and one of its limitations is that it can only be used for very large amounts of cloud storage. It does, however, extend support for dynamic data modification through batch auditing. 
An important feature of the secure and efficient privacy-preserving provable data possession scheme for cloud storage was that it supported data dynamics, privacy preservation, batch auditing and blocker verification for an untrusted, outsourced storage model (Pathare and Chouragadec, 2017). A homomorphic signature mechanism based on a new identity scheme was devised to avoid the use of public-key certificates; this signature system was shown to resist identity attacks in the random oracle model as well as forged-message attacks (Nayak and Tripathy, 2018; Lin et al., 2017). One issue when storing data in a public cloud is that the data owner must distribute an enormous number of keys to users so that they can access the files. To address this, the key-aggregate searchable encryption (KASE) scheme was introduced: when sharing a huge number of documents, the data owner only has to supply a single aggregate key to the user, and the user only needs to submit a single trapdoor. Although the concept is innovative, the KASE technique does not apply to the increasingly common manufactured cloud. Cui et al. (2016) argue that as the amount of data grows, the distribution management system (DMS) will be unable to handle it. As a result, various provable data possession (PDP) schemes have been developed, yet practically all of them lack adequate security; hence a certificate-based PDP built on bilinear pairing was introduced. Being both robust and efficient, it is mostly applicable to DMS. The main purpose of this research is to design and implement a secure cloud infrastructure for sharing group data; it provides an efficient and secure protocol that allows many users to share data in the cloud with ease.

The methodology and contributions of this paper are as follows. The major goal of this study is to design and implement a secure cloud infrastructure for sharing group data, providing an efficient and secure protocol that allows several users to share data in the cloud without difficulty. The selection scheme design (SSD) comprises two algorithms: Algorithm 1 is designed for a limited number of users, and Algorithm 2 is redesigned for multiple users. Further, the authors design the SSD security protocol as a three-phase model: Phase 1 generates the parameters and distributes the private keys, Phase 2 generates the general key for all available users, and Phase 3 prevents dishonest users from taking part in data sharing.
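As a concrete illustration, the three-phase flow can be sketched as a toy protocol. Everything below is an assumption made for illustration only: the prime modulus, the additive key-combination rule and all function names are invented and are not the authors' actual SSD construction.

```python
import secrets

P = 2**127 - 1  # public prime modulus, a Phase 1 parameter (illustrative)

def phase1_setup(users):
    """Phase 1: generate parameters and distribute one private key per user."""
    return {u: secrets.randbelow(P - 1) + 1 for u in users}

def phase2_general_key(private_keys):
    """Phase 2: derive a single general key for all available users."""
    key = 0
    for sk in private_keys.values():
        key = (key + sk) % P  # toy additive aggregation rule
    return key

def phase3_exclude(private_keys, dishonest):
    """Phase 3: bar dishonest users by re-deriving the key without them."""
    honest = {u: sk for u, sk in private_keys.items() if u not in dishonest}
    return phase2_general_key(honest)

sks = phase1_setup(["alice", "bob", "carol"])
gk_all = phase2_general_key(sks)
gk_honest = phase3_exclude(sks, {"carol"})
assert gk_all != gk_honest  # an excluded user invalidates the old general key
```

The point of the sketch is only the phase structure: setup and per-user keys, then one shared key, then exclusion of dishonest users by re-derivation.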

Data sharing in cloud computing provides unlimited computational resources and storage to enterprises and individuals; at the same time, it raises several privacy and security concerns, such as fault tolerance, reliability, confidentiality and data integrity. The key consensus mechanism is a fundamental cryptographic primitive for secure communication; motivated by this, the authors developed the SSD mechanism, which embraces multiple users in the data-sharing model.

Files shared in the cloud should be encrypted for security purposes and later decrypted for users to access them. For evaluation of the SSD method, the authors considered an ideal system environment: Java was used as the programming language and Eclipse as the integrated development environment (IDE) for the proposed model evaluation. The model ran on hardware with 4 GB of RAM and an i7 processor, and the PBC library was used for the pairing operations (PBC Library, 2022). In the evaluation, the number of users is varied to compare against the existing RDIC methodology (Li et al., 2020). For the purposes of the SSD security protocol, a prime number is chosen as the number of users in this work.

DOI: 10.1108/IJPCC-02-2022-0047 | Published: 2022-06-22 | © 2020 Emerald Publishing Limited
Federate learning of corporate social authority and industry 4.0 that focus on young people: a strategic management framework for human resources
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-02-2022-0056/full/html
V.P. Sriram, M.A. Sikandar, Eti Khatri, Somya Choubey, Ity Patni, Lakshminarayana K., Kamal Gulati
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

The global youth population is defined as individuals aged 15 to 24 years. According to statistics from the Instituto Brasileiro de Geografia e Estatística (IBGE), in 2017 the second-largest female age group in Brazil was 15 to 19 years, behind only the 35-to-39 group; at that time, the corresponding male population in Brazil was larger. The difficulties of the young generation affected the preceding generation and promoted social dynamism. Worldwide data show that young people constantly seek out the digital world, yet in reality approximately one-third of the population had no internet access in 2017.

The worldwide movement around topics such as the threefold basis of strategy and Industry 4.0 makes it possible to establish a link to corporate responsibility towards society. The present study was produced between 1 March 2020 and 2 September 2020 through human resources and literature evaluation relating to the ideas of strategy, Industry 4.0, social responsibility and youth development; its motive is the global development of youth. Two recommendations emerged from the literature study and information gathering that enabled "analysing corporate social responsibility and Industry 4.0 with a pivot on youth development: a strategic framework for human resource management".

The adoption of defensible practices and of the technologies brought forth by the industrial revolution is emphasised worldwide.

Focusing on the use of these ideas is essential so that young people can be absorbed into the workforce in the labour market. To achieve this, the recent study combines the CSR idea with this threefold theoretical basis.

DOI: 10.1108/IJPCC-02-2022-0056 | Published: 2022-07-12 | © 2022 Emerald Publishing Limited
IIBES: a proposed framework to improve the identity-based encryption system for securing federated learning
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-02-2022-0073/full/html
Maitri Patel, Rajan Patel, Nimisha Patel, Parita Shah, Kamal Gulati
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

In the field of cryptography, authentication, secrecy and identification can be accomplished for any computer-based system through the use of secret keys. The need to acquire certificates endorsed by a certificate authority (CA) to authenticate users for the exchange of encrypted communications is one of the most significant constraints on the wide adoption of public-key cryptography (PKC), as the process is time-consuming and error-prone. Identity-based cryptography (IBC) reduces PKC's certificate and key-management operating costs, and identity-based encryption (IBE) is a crucial primitive in IBC. The idea behind the IBE scheme was to diminish the complexity of certificate and key management, but it also gives rise to the key escrow and key revocation problems, which can give unauthorised users access to encrypted information.

This paper aims to compare the results of IIBES with the existing system and to provide a security analysis for it; the proposed system can be used for security in federated learning.

Furthermore, IIBES can be implemented using other encryption/decryption algorithms, such as elliptic curve cryptography (ECC), to compare execution efficiency. The proposed system can be used for security in federated learning.

As a result, a novel enhanced IBE scheme, IIBES, is suggested and implemented in the Java programming language using the RSA algorithm. It eradicates the key escrow problem by eliminating the need for a single key generation centre (KGC), and it addresses the key revocation problem by using sub-KGCs (SKGCs) and a shared secret with a nonce. IIBES also provides authentication through identity-based signatures (IBS) and can be used for securing data in federated learning.
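The sub-KGC idea can be illustrated with a minimal sketch: several sub-KGCs each contribute a partial key, and the user folds in a nonce that no authority ever sees, so no single party holds the complete private key. The hash-based combination below is an assumption made purely for illustration and is not the IIBES construction itself.

```python
import hashlib
import secrets

def skgc_partial(master_secret: bytes, identity: str) -> bytes:
    """Each sub-KGC (SKGC) derives a partial key from its own master secret."""
    return hashlib.sha256(master_secret + identity.encode()).digest()

def user_private_key(partials, nonce: bytes) -> bytes:
    """The user combines all partials with a private nonce.

    No single SKGC can reconstruct this key, which is what removes the
    classical key escrow held by one KGC in plain IBE."""
    h = hashlib.sha256()
    for p in sorted(partials):
        h.update(p)
    h.update(nonce)  # known only to the user
    return h.digest()

# two independent sub-KGCs serve the same identity
skgc1, skgc2 = secrets.token_bytes(32), secrets.token_bytes(32)
parts = [skgc_partial(skgc1, "alice@example.com"),
         skgc_partial(skgc2, "alice@example.com")]
sk_alice = user_private_key(parts, nonce=secrets.token_bytes(16))
```

In this toy model, rotating the nonce yields a fresh key for the same identity, which is one way the shared-secret-with-nonce idea can emulate revocation without reissuing certificates.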

DOI: 10.1108/IJPCC-02-2022-0073 | Published: 2022-06-24 | © 2020 Emerald Publishing Limited
Energy efficient multi-tasking for edge computing using federated learning
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-03-2022-0106/full/html
Mukesh Soni, Nihar Ranjan Nayak, Ashima Kalra, Sheshang Degadwala, Nikhil Kumar Singh, Shweta Singh
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

The purpose of this paper is to improve the existing paradigm of edge computing to maintain a balanced energy usage.

A new greedy algorithm is proposed to balance the energy consumption in edge computing.

The new greedy algorithm balances energy more efficiently than the random approach by an average of 66.59%.

The results presented in this paper are better than those of existing algorithms.
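The abstract gives no pseudocode, but a common greedy balancer of this kind assigns each incoming task to the node with the lowest accumulated energy cost. The sketch below is an assumed illustration of that idea, with invented task costs; it is not the paper's exact algorithm.

```python
def greedy_assign(task_costs, n_nodes):
    """Send each task to the edge node with the smallest energy used so far."""
    load = [0.0] * n_nodes
    plan = []
    for cost in task_costs:
        i = load.index(min(load))  # least-loaded node wins
        load[i] += cost
        plan.append(i)
    return plan, load

plan, load = greedy_assign([5, 3, 8, 2, 7, 4], n_nodes=3)
print(plan)                   # node chosen for each task
print(max(load) - min(load))  # energy spread across nodes
```

A random assignment can pile several heavy tasks onto one node; the greedy rule keeps the spread between the busiest and idlest node small, which is the balancing effect the paper quantifies against the random approach.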

DOI: 10.1108/IJPCC-03-2022-0106 | Published: 2022-07-08 | © 2022 Emerald Publishing Limited
Federated learning algorithm based on matrix mapping for data privacy over edge computing
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-03-2022-0113/full/html
Pradyumna Kumar Tripathy, Anurag Shrivastava, Varsha Agarwal, Devangkumar Umakant Shah, Chandra Sekhar Reddy L., S.V. Akilandeeswari
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

This paper aims to provide security and privacy for Byzantine clients against different types of attacks.

In this paper, the authors use a federated learning algorithm based on matrix mapping for data privacy over edge computing.

By using the Softmax layer's probability distribution for the model, Byzantine tolerance can be increased from 40% to 45% under the blocking-convergence attack, and the edge backdoor attack can be stopped.

According to the test results, by using the Softmax layer's probability distribution for the model, the aggregation method can protect against at least 30% Byzantine clients.
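The abstract does not spell out how the Softmax distribution is applied, so the following is only a generic sketch of softmax-weighted robust aggregation: client updates far from the coordinate-wise median receive near-zero weight. The distance-to-median scoring and all values are assumptions for illustration, not the paper's construction.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def aggregate(updates):
    """Weight each client update by softmax(-distance to the median update)."""
    dim = len(updates[0])
    median = [sorted(u[d] for u in updates)[len(updates) // 2] for d in range(dim)]
    dists = [sum((u[d] - median[d]) ** 2 for d in range(dim)) ** 0.5 for u in updates]
    w = softmax([-d for d in dists])  # outliers get near-zero weight
    return [sum(w[i] * u[d] for i, u in enumerate(updates)) for d in range(dim)]

honest = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1]]
byzantine = [[100.0, -100.0]]  # poisoned update
agg = aggregate(honest + byzantine)
assert all(abs(a - 1.0) < 0.2 for a in agg)  # result stays near the honest mean
```

The design choice illustrated here is that softmax turns raw outlier scores into a probability distribution over clients, so the aggregate degrades gracefully as the fraction of Byzantine clients grows instead of failing at the first poisoned update.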

DOI: 10.1108/IJPCC-03-2022-0113 | Published: 2022-07-14 | © 2022 Emerald Publishing Limited
A novel federated learning based lightweight sustainable IoT approach to identify abnormal traffic
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-03-2022-0119/full/html
Yasser Alharbi
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

This strategy significantly reduces the computational and storage overhead required when using the kernel density estimation method to calculate the anomaly evaluation value of a test sample.

To deal effectively with the security threats that botnets pose to the home and personal Internet of Things (IoT), and especially with the problem of insufficient resources for anomaly detection in the home environment, a novel kernel density estimation-based, federated learning-based lightweight IoT anomaly traffic detection method (KDE-LIATD) is proposed. First, the KDE-LIATD method uses Gaussian kernel density estimation to estimate, for every normal sample in the training set, the probability density function of each feature dimension and the corresponding probability density. Then, a feature selection algorithm based on kernel density estimation extracts the features that contribute most to anomaly detection, reducing the feature dimension while improving detection accuracy. Finally, the anomaly evaluation value of the test sample is calculated by cubic spline interpolation and anomaly detection is performed.
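The Gaussian KDE step can be made concrete with a small sketch: one density estimator per feature is fitted on normal samples only, and a low joint density marks a test sample as anomalous. The bandwidth, the toy feature values and the log-density score below are illustrative assumptions; the paper's actual pipeline additionally performs KDE-based feature selection and cubic spline interpolation.

```python
import math

def gaussian_kde(samples, h=0.5):
    """Return a 1-D Gaussian kernel density estimator over normal samples."""
    n = len(samples)
    def pdf(x):
        return sum(math.exp(-((x - s) / h) ** 2 / 2) for s in samples) \
               / (n * h * math.sqrt(2 * math.pi))
    return pdf

# one estimator per feature, trained only on normal traffic (toy values)
train = [[0.1, 10.0], [0.2, 11.0], [0.15, 10.5], [0.12, 9.8]]
kdes = [gaussian_kde([row[d] for row in train]) for d in range(2)]

def anomaly_score(x):
    """Negative log joint density: higher means more anomalous."""
    return -sum(math.log(max(k(v), 1e-300)) for k, v in zip(kdes, x))

normal_score = anomaly_score([0.14, 10.2])
attack_score = anomaly_score([5.0, 300.0])
assert attack_score > normal_score
```

Because each feature has its own estimator, a resource-constrained home gateway can drop the low-contribution estimators entirely, which is the overhead reduction the findings describe.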

The simulation results show that the proposed KDE-LIATD method is relatively strong at detecting abnormal traffic from heterogeneous IoT devices.

With its robustness and compatibility, it can effectively detect abnormal traffic of household and personal IoT botnets.

DOI: 10.1108/IJPCC-03-2022-0119 | Published: 2022-06-10 | © 2022 Emerald Publishing Limited
A novel Internet of Things and federated learning-based privacy protection in blockchain technology
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-03-2022-0123/full/html
Shoayee Dlaim Alotaibi
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

However, blockchain (BC) is computationally expensive, has limited scalability and incurs significant bandwidth overhead and delays, which makes it a poor fit for the Internet of Things (IoT) setting. The authors therefore propose a lightweight scalable blockchain (LSB) optimised for IoT requirements and investigate LSB in a smart-home setting as a representative model for enabling wider IoT applications. Resource-constrained devices in the smart home benefit from a centralised manager that establishes shared units for communication and processes all incoming and outgoing requests.

Federated learning (FL) and blockchain (BC) have drawn significant attention owing to blockchain's immutability and its associated security and privacy benefits. The security challenges of FL and IoT can potentially be overcome with BC.

LSB achieves this by forming an overlay network in which higher-resource devices jointly manage a public BC together with federated learning, which ensures privacy and security end to end.

The overlay is organised into distinct clusters to reduce overhead, and cluster heads are responsible for handling the public BCs. LSB incorporates several further optimisations, including a lightweight consensus algorithm, distributed trust and a throughput management mechanism.

DOI: 10.1108/IJPCC-03-2022-0123 | Published: 2022-08-10 | © 2022 Emerald Publishing Limited
Development of cloud selection supporting model for green information and communication technology services
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-03-2022-0127/full/html
Sanjiv Rao Godla, Jara Muda Haro, S.V.V.S.N. Murty Ch, R.V.V. Krishna
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

The purpose of the study is to develop a cloud supporting model for green computing. In today's world, information technology (IT) plays a significant role. Because of the rapid growth of the IT business and the high level of greenhouse gas emissions, prominent data centers are increasingly considering green IT techniques to reduce their environmental impact. Both developing and underdeveloped countries are widely adopting green infrastructure and services over the cloud because of their cost-effectiveness, scalability and guaranteed high uptime. Several studies have observed that cloud computing offerings extend beyond green information and communication technology (ICT) services and solutions. Therefore, anything offered over the cloud also needs to be green to reduce the adverse influence on the environment.

This paper examines the rationale for using green ICT in higher education and identifies critical success factors for implementing green ICT, based on an analysis of selected educational organizations and interviews with key academic experts from the universities of Ethiopia in general and Bule Hora University in particular.

Finally, this paper describes the design and development of a green cloud selection supporting model for green ICT in higher educational institutions, which helps cloud service customers choose the greenest cloud-based ICT products and services.
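The selection-support idea can be illustrated with a simple multi-criteria weighted-scoring sketch. The criteria, weights and provider figures below are hypothetical stand-ins, not values from the study.

```python
# Illustrative multi-criteria scoring for green cloud selection.
# Criteria, weights and provider figures are hypothetical examples;
# the study's actual model may use different criteria and aggregation.

def score_provider(metrics, weights):
    """Weighted sum of normalized criteria (higher means greener)."""
    return sum(weights[k] * metrics[k] for k in weights)

# Hypothetical weights: energy efficiency and renewable share dominate.
weights = {"energy_efficiency": 0.4, "renewable_share": 0.35, "cost_score": 0.25}

# Two hypothetical providers with criteria normalized to [0, 1].
providers = {
    "cloud_a": {"energy_efficiency": 0.8, "renewable_share": 0.9, "cost_score": 0.6},
    "cloud_b": {"energy_efficiency": 0.7, "renewable_share": 0.5, "cost_score": 0.9},
}

best = max(providers, key=lambda p: score_provider(providers[p], weights))
# cloud_a: 0.4*0.8 + 0.35*0.9 + 0.25*0.6 = 0.785
# cloud_b: 0.4*0.7 + 0.35*0.5 + 0.25*0.9 = 0.680  -> best is "cloud_a"
```

A real deployment would normalize raw measurements (for example, power usage effectiveness) before scoring, but the ranking step itself reduces to this kind of weighted aggregation.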

This study may be a significant source of new information for green ICT design and implementation in higher education institutions, helping to preserve the environment and reduce ICT's impact on human life.

Development of cloud selection supporting model for green information and communication technology services | DOI: 10.1108/IJPCC-03-2022-0127 | Sanjiv Rao Godla, Jara Muda Haro, S.V.V.S.N. Murty Ch, R.V.V. Krishna | International Journal of Pervasive Computing and Communications, ahead-of-print | Published 2022-07-21 | © 2022 Emerald Publishing Limited
Enhanced gray wolf optimization for estimation of time difference of arrival in WSNs
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-05-2022-0181/full/html
Devika E., Saravanan A.
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

Intelligent prediction of node localization in wireless sensor networks (WSNs) is a major concern for researchers. The huge amount of data generated by modern sensor array systems requires computationally efficient calibration techniques. This paper aims to improve localization accuracy by identifying obstacles in the optimization process and network scenarios.

The proposed method incorporates distance estimation between nodes and packet-transmission hop counts. This estimation is used in the proposed support vector machine (SVM) to find the network path using time difference of arrival (TDoA). However, if the data set is noisy, the SVM is prone to poor optimization, which leads to overlap between the target classes and the pathways derived through TDoA. The enhanced gray wolf optimization (EGWO) technique is therefore introduced to eliminate overlapping target classes in the SVM.
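As a rough illustration of the optimization step, the sketch below runs a plain gray wolf optimizer over a TDoA residual for a 2D node. The anchor layout, fitness definition and parameters are illustrative, and the paper's EGWO enhancements and SVM coupling are not modeled.

```python
# Plain gray wolf optimization (GWO) sketch minimizing a TDoA residual.
# Anchors, the "true" node and all parameters are illustrative only;
# the paper's EGWO adds enhancements not reproduced here.
import math
import random

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_node = (4.0, 6.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# TDoA measurements: range differences relative to the first anchor.
tdoa = [dist(true_node, a) - dist(true_node, anchors[0]) for a in anchors]

def fitness(p):
    """Sum of squared TDoA residuals; zero at a consistent position."""
    return sum((dist(p, a) - dist(p, anchors[0]) - t) ** 2
               for a, t in zip(anchors, tdoa))

def gwo(n_wolves=20, iters=200, lo=-5.0, hi=15.0, seed=1):
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n_wolves)]
    best = min(wolves, key=fitness)[:]
    for it in range(iters):
        wolves.sort(key=fitness)
        leaders = [w[:] for w in wolves[:3]]   # alpha, beta, delta (copies)
        a = 2.0 * (1 - it / iters)             # exploration -> exploitation
        for w in wolves:
            for d in range(2):
                x = 0.0
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A, C = a * (2 * r1 - 1), 2 * r2
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(max(x / 3.0, lo), hi)
        cand = min(wolves, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand[:]
    return best

est = gwo()   # should land near a position consistent with the TDoA data
```

The same fitness shape applies when the residual comes from noisy field measurements; EGWO-style modifications typically adjust the coefficient schedule or leader update to escape the local minima this vanilla form can fall into.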

The performance and efficacy of the model are analyzed against existing TDoA methodologies. The simulation results show that the proposed TDoA-EGWO achieves a higher detection efficiency of 98%, a control overhead of 97.8% and a better packet delivery ratio than other traditional methods.

The proposed method is successful in detecting the unknown position of the sensor node with a detection rate greater than that of other methods.

Enhanced gray wolf optimization for estimation of time difference of arrival in WSNs | DOI: 10.1108/IJPCC-05-2022-0181 | Devika E., Saravanan A. | International Journal of Pervasive Computing and Communications, ahead-of-print | Published 2022-08-30 | © 2020 Emerald Publishing Limited
Federate learning on Web browsing data with statically and machine learning technique
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-05-2022-0184/full/html
Ratnmala Nivrutti Bhimanpallewar, Sohail Imran Khan, K. Bhavana Raj, Kamal Gulati, Narinder Bhasin, Roop Raj
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

Federation analytics approaches are a current area of study that has already progressed beyond the analysis of metrics and counts. It is possible to acquire aggregated information about on-device data by training machine learning models using federated learning techniques, without any of the raw data ever having to leave the devices in question. Web browser forensics research has focused on individual Web browsers or the architectural analysis of specific log files rather than on broad topics. This paper aims to propose major tools used for Web browser analysis.

Each kind of Web browser has its own unique set of features. This allows users to choose their preferred browser or to use several browsers at once. If a forensic examiner has access to just one Web browser's log files, it is difficult to determine which sites a person has visited. The agent must thus be capable of analyzing all Web browsers available on a single workstation and performing an integrated study across them.

Federated learning has emerged as a training paradigm in such settings. Web browser forensics research in general has focused on certain browsers or the computational modeling of specific log files. Internet users engage in a wide range of activities using an internet browser, such as searching for information and sending e-mails.
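The federated training paradigm mentioned above can be sketched in a few lines of federated averaging (FedAvg): each client fits a model on its own data, and only model updates reach the server. The one-parameter model, data sets and learning rate below are illustrative, not the study's setup.

```python
# Minimal federated averaging (FedAvg) sketch. Each client runs local
# gradient descent on its private data for the model y = w * x; the
# server only ever sees the resulting parameters, never the raw records.
# Data, learning rate and round counts are illustrative.

def local_step(w, data, lr=0.1, epochs=50):
    """Gradient descent on mean squared error for y = w * x."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Two clients with private data sets drawn from y = 3x (never shared).
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(1.5, 4.5), (3.0, 9.0)],
]

w_global = 0.0
for _round in range(5):
    local = [local_step(w_global, d) for d in clients]   # train locally
    w_global = sum(local) / len(local)                   # server averages
# w_global converges to 3.0 without any client data leaving the client.
```

In a browsing-data setting the "model" would be, for example, a classifier over visit features, but the privacy property is the same: the server aggregates parameters, not histories.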

It is also essential that the investigator have access to user activity when conducting an inquiry. This data, which may be used to assess information retrieval activities, is very critical. In this paper, the authors propose a major tool used for Web browser analysis. This study's proposed algorithm protects data privacy effectively in real-world experiments.

Federate learning on Web browsing data with statically and machine learning technique | DOI: 10.1108/IJPCC-05-2022-0184 | Ratnmala Nivrutti Bhimanpallewar, Sohail Imran Khan, K. Bhavana Raj, Kamal Gulati, Narinder Bhasin, Roop Raj | International Journal of Pervasive Computing and Communications, ahead-of-print | Published 2022-08-22 | © 2022 Emerald Publishing Limited
LMH-RPL: a load balancing and mobility aware secure hybrid routing protocol for low power lossy network
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-05-2022-0213/full/html
Robin Cyriac, Saleem Durai M.A.
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

Routing protocol for low-power lossy networks (RPL), being the de facto routing protocol used by low-power lossy networks, needs to provide adequate routing service to mobile nodes (MNs) in the network. As RPL is designed to work under constrained power requirements, its route-updating frequency is not sufficient for MNs in the network. The purpose of this study is to ensure that MNs enjoy a seamless connection throughout the network with minimal handover delay.

This study proposes a load-balancing, mobility-aware secure hybrid RPL in which the static node (SN) identifies routes using metrics such as expected transmission count and path delay; parent selection is further refined using remaining energy to identify the primary route and queue availability to maintain the secondary route. MNs identify routes with the help of smart timers and received signal strength indicator sampling of parent and neighbor nodes. In this work, MNs are also secured against the rank attack in RPL.
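The composite parent selection described above can be sketched as a scoring pass over candidate parents: a primary parent chosen on ETX, path delay and remaining energy, and a secondary parent kept for its free queue capacity. The weights and candidate figures are illustrative, not LMH-RPL's actual formula.

```python
# Hedged sketch of composite parent selection in the spirit of LMH-RPL:
# primary parent by ETX, path delay and residual energy; secondary
# route maintained by queue availability. Weights are illustrative.

def primary_score(n):
    # Lower is better: penalize ETX and delay, reward residual energy.
    return 0.5 * n["etx"] + 0.3 * n["delay_ms"] / 100 - 0.2 * n["energy"]

# Hypothetical candidate parents advertised to a static node.
candidates = [
    {"id": "A", "etx": 1.2, "delay_ms": 40, "energy": 0.9, "queue_free": 0.6},
    {"id": "B", "etx": 1.1, "delay_ms": 90, "energy": 0.4, "queue_free": 0.9},
    {"id": "C", "etx": 2.0, "delay_ms": 30, "energy": 0.8, "queue_free": 0.7},
]

primary = min(candidates, key=primary_score)          # best composite score
secondary = max((n for n in candidates if n is not primary),
                key=lambda n: n["queue_free"])        # most queue headroom
# Here A wins the primary slot (0.54 vs 0.74 and 0.93); B, with the most
# free queue capacity, is kept as the secondary route.
```

Keeping the secondary choice on a different criterion (queue headroom rather than the composite score) is what lets the secondary route absorb load when the primary parent's queue fills.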

This model produces favorable results in terms of packet delivery ratio, delay, energy consumption and the number of nodes still alive in the network when compared with different RPL protocols with mobility support. The proposed model reduces packet retransmission in the network by a large margin by providing load balancing to SNs and a seamless connection to MNs.

In this work, a novel algorithm was developed to provide seamless handover for MNs in the network. A suitable technique was developed to provide load balancing to SNs in the network by maintaining an appropriate secondary route.

LMH-RPL: a load balancing and mobility aware secure hybrid routing protocol for low power lossy network | DOI: 10.1108/IJPCC-05-2022-0213 | Robin Cyriac, Saleem Durai M.A. | International Journal of Pervasive Computing and Communications, ahead-of-print | Published 2022-09-20 | © 2022 Emerald Publishing Limited
A pervasive health care device computing application for brain tumors with machine and deep learning techniques
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-06-2021-0137/full/html
Sreelakshmi D., Syed Inthiyaz
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

Pervasive health-care computing applications in the medical field provide better diagnosis of various organs such as the brain, spinal cord, heart and lungs. The purpose of this study is brain tumor diagnosis using machine learning (ML) and deep learning (DL) techniques. The brain diagnosis process is an important task in medical research and the most prominent step in providing treatment to the patient. Therefore, a high diagnostic accuracy rate is important so that patients can readily receive treatment from medical consultants. There have been many earlier investigations into diagnosing brain diseases. Moreover, it is necessary to improve the performance measures using DL and ML approaches.

In this paper, various brain disorder diagnosis applications are differentiated through the following implemented techniques, which segment and classify brain magnetic resonance imaging or computerized tomography images. Adaptive median filtering, a convolutional neural network, gradient boosting machine learning (GBML) and an improved support vector machine are the advanced methods used to extract hidden features and provide the medical information needed for diagnosis. The proposed design is implemented in Python 3.7.8 for simulation analysis.
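The adaptive-median denoising step in the pipeline above can be sketched in pure Python: a pixel that looks like impulse noise (the minimum or maximum of its window) is replaced by the window median, while ordinary pixels pass through. This is a simplified 3x3-window stand-in for the full adaptive median filter, for illustration only.

```python
# Simplified adaptive-median sketch: replace a pixel with the window
# median only when it is an impulse candidate (window min or max).
# A 3x3 window on a plain list-of-lists image; illustration only.
from statistics import median

def adaptive_median(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]           # borders are left untouched
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = [img[i + di][j + dj]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            if img[i][j] in (min(win), max(win)):
                out[i][j] = median(win)     # suppress the impulse
    return out

noisy = [
    [10, 10, 10, 10],
    [10, 255, 10, 10],   # salt-noise impulse at (1, 1)
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
clean = adaptive_median(noisy)   # the 255 impulse becomes 10
```

In the actual pipeline this denoised image would then be fed to the segmentation and CNN/GBML classification stages; the filter always reads from the original image so corrections do not cascade.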

This research is of considerable help to investigators, diagnosis centers and doctors. For each model, performance measures are taken to estimate the application's performance. Measures such as accuracy, sensitivity, recall, F1 score, peak signal-to-noise ratio and correlation coefficient have been estimated using the proposed methodology. Moreover, these metrics show a marked improvement compared with earlier models.

The implemented DL and ML designs outperform earlier methodologies, achieving good application success scores.

A pervasive health care device computing application for brain tumors with machine and deep learning techniques | DOI: 10.1108/IJPCC-06-2021-0137 | Sreelakshmi D., Syed Inthiyaz | International Journal of Pervasive Computing and Communications, ahead-of-print | Published 2021-12-07 | © 2021 Emerald Publishing Limited
Sentiment analysis in aspect term extraction for mobile phone tweets using machine learning techniques
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-06-2021-0143/full/html
Venkatesh Naramula, Kalaivania A.
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

This paper aims to extract aspect terms from mobile phone (iPhone and Samsung) tweets using NLTK techniques; multiple-aspect extraction is one of the challenges. Machine learning techniques that can be trained with supervised strategies are then used to predict and classify the sentiment present in mobile phone tweets. This paper also presents the proposed architecture for extracting aspect terms and sentiment polarity from customer tweets.

In aspect-based sentiment analysis, aspect-term extraction is one of the key challenges, where different aspects are extracted from online user-generated content. This study focuses on customer tweets/reviews on different mobile products, an important form of opinionated content, by looking at different aspects. Different deep learning techniques are used to extract all aspects from customer tweets, which are collected using the Twitter API.
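A toy version of the aspect-plus-polarity task can make the two sub-problems concrete. The study uses NLTK tagging and trained models; to stay self-contained, this sketch matches tokens against a small hypothetical aspect lexicon and sentiment word lists instead.

```python
# Toy aspect-term extraction and polarity sketch for phone tweets.
# The aspect lexicon and sentiment word lists below are hypothetical;
# the study's pipeline uses NLTK tagging and trained ML models.

ASPECTS = {"battery", "camera", "screen", "price"}
POSITIVE = {"great", "amazing", "good", "love"}
NEGATIVE = {"bad", "poor", "terrible", "drains"}

def extract(tweet):
    """Return (aspect terms found, coarse sentiment polarity)."""
    tokens = tweet.lower().split()
    aspects = [t for t in tokens if t in ASPECTS]
    score = (sum(t in POSITIVE for t in tokens)
             - sum(t in NEGATIVE for t in tokens))
    polarity = ("positive" if score > 0
                else "negative" if score < 0 else "neutral")
    return aspects, polarity

aspects, polarity = extract("The camera is amazing")
# -> (["camera"], "positive")
```

The real architecture replaces the lexicon lookup with POS-tag-based noun-phrase extraction and the word counting with a trained classifier (random forest, KNN or SVM, per the comparison below), but the interface (tweet in, aspect terms and polarity out) is the same.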

A comparison of the results with traditional machine learning methods such as random forest, K-nearest neighbour and support vector machine, using two data sets (iPhone tweets and Samsung tweets), is presented for better accuracy.

In this paper, the authors focus on extracting aspect terms from mobile phone (iPhone and Samsung) tweets using NLTK techniques; multi-aspect extraction is one of the challenges. Machine learning techniques that can be trained with supervised strategies are then used to predict and classify the sentiment present in mobile phone tweets. The paper also presents the proposed architecture for extracting aspect terms and sentiment polarity from customer tweets.

Sentiment analysis in aspect term extraction for mobile phone tweets using machine learning techniques | DOI: 10.1108/IJPCC-06-2021-0143 | Venkatesh Naramula, Kalaivania A. | International Journal of Pervasive Computing and Communications, ahead-of-print | Published 2021-10-18 | © 2021 Emerald Publishing Limited
A lightweight and flexible mutual authentication and key agreement protocol for wearable sensing devices in WBAN
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-06-2021-0144/full/html
Sandeep Kumar Reddy Thota, C. Mala, Geetha Krishnan
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

A wireless body area network (WBAN) is a collection of sensing devices attached to a person’s body, typically used during health care to track their physical state. This paper aims to study the security challenges and the various attacks that occur while transferring a person’s sensitive medical diagnosis information in a WBAN.

This technology has gained significant prominence in the medical field. These wearable sensors transfer information to doctors, and there are numerous possibilities for an intruder to pose as a doctor and obtain the patient’s vital information. As a result, mutual authentication and session key negotiation are critical security challenges for wearable sensing devices in a WBAN. This work proposes an improved mutual authentication and key agreement protocol for wearable sensing devices in a WBAN. The existing related schemes have higher computational and storage requirements, whereas the proposed method provides a flexible solution with less complexity.

As sensor devices are resource-constrained, the proposed approach uses only cryptographic hash functions and bit-wise XOR operations; hence, it is lightweight and flexible. The protocol’s security is validated using the AVISPA tool, and it withstands various security attacks. The proposed protocol’s simulation and performance analysis, compared with current relevant schemes, show that it produces efficient outcomes.
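To make the hash-and-XOR primitive class concrete, the sketch below walks through a simplified challenge-response exchange and session-key derivation between a sensor node and a hub. The message layout and key derivation are illustrative, not the paper's exact scheme, and the node-to-hub confirmation message of a full protocol is omitted.

```python
# Hedged sketch of a hash-and-XOR style authentication and key
# agreement exchange. Message structure and key derivation are
# illustrative stand-ins for the paper's actual protocol.
import hashlib
import secrets

def h(*parts):
    """Hash a concatenation of byte strings."""
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

shared_key = secrets.token_bytes(32)   # pre-shared between node and hub

# Node -> hub: fresh nonce n1, masked with a hash of the shared key.
n1 = secrets.token_bytes(32)
m1 = xor(n1, h(shared_key))

# Hub unmasks n1, picks its own nonce n2, and returns n2 masked under
# (key, n1) plus a proof that it knows both the key and n1.
n1_hub = xor(m1, h(shared_key))
n2 = secrets.token_bytes(32)
masked_n2 = xor(n2, h(shared_key, n1_hub))
proof = h(n1_hub, shared_key)

# Node authenticates the hub via the proof, unmasks n2, and both
# sides derive the same session key from (key, n1, n2).
assert proof == h(n1, shared_key)          # hub knew the key and n1
n2_node = xor(masked_n2, h(shared_key, n1))
sk_node = h(shared_key, n1, n2_node)
sk_hub = h(shared_key, n1_hub, n2)         # equals sk_node
```

Only hashing and XOR appear on the node side, which is why this primitive class suits resource-constrained sensors; the fresh nonces on both sides are what give each session a distinct key.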

This technology has gained significant prominence in the medical sector. These sensing devices transmit information to doctors, and there are possibilities for an intruder to pose as a doctor and obtain the patient’s vital information. Hence, this paper proposes a lightweight and flexible protocol for mutual authentication and key agreement for wearable sensing devices in a WBAN that uses only cryptographic hash functions and bit-wise XOR operations. The proposed protocol is simulated using the AVISPA tool, and its performance is better than that of existing methods. This paper proposes a novel improved mutual authentication and key agreement protocol for wearable sensing devices in a WBAN.

A lightweight and flexible mutual authentication and key agreement protocol for wearable sensing devices in WBAN | DOI: 10.1108/IJPCC-06-2021-0144 | Sandeep Kumar Reddy Thota, C. Mala, Geetha Krishnan | International Journal of Pervasive Computing and Communications, ahead-of-print | Published 2022-01-14 | © 2021 Emerald Publishing Limited
Enhanced cipher text-policy attribute-based encryption and serialization on media cloud data
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-06-2022-0223/full/html
Mohan Naik R., H. Manoj T. Gadiyar, Sharath S. M., M. Bharathrajkumar, Sowmya T. K.
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

Various system techniques and models are used for access control, performing cryptographic operations to provide efficient access control in cloud and Internet of Things (IoT) environments. Traditional symmetric cryptographic techniques, which use the same key for both encryption and decryption, are distributed on a large scale in the cloud computing environment. However, these techniques suffer from key distribution and key management problems. The purpose of this study is to provide efficient key management and key distribution in the cloud computing environment.

This paper uses the Cipher text-Policy Attribute-Based Encryption (CP-ABE) technique with a proper access control policy to give the data owner control and to share data through an encryption process in cloud and IoT environments. The data are shared with the help of cloud storage among authorized users. The main method used in this research is the Enhanced CP-ABE Serialization (E-CP-ABES) approach.

The results are measured in terms of encryption, completion and decryption time, and they show improvement over the existing CP-ABE technique. The comparative analysis showed that the proposed E-CP-ABES obtained a completion time of 2373 ms for a key length of 256, whereas the existing CP-ABE obtained a completion time of 3129 ms. In addition, the existing Advanced Encryption Standard (AES) scheme showed a completion time of 3449 ms.

The proposed research work uses the E-CP-ABES access control technique, which verifies hidden attributes under a highly sensitive dataset constraint and provides a solution to the key management and access control problems existing in IoT and cloud computing environments. The novelty of the research is that the proposed E-CP-ABES incorporates an extensible, partially hidden constraint policy through a serialization procedure that serializes the policy to a byte stream. A redundant residue number system is used to remove errors that occur while processing the bits or data obtained from serialization, and the data stream is recovered using the deserialization process.
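The serialize/deserialize round trip described above can be sketched minimally. This is an illustrative stand-in, not the paper's encoding: the policy field names are invented, and the paper's CP-ABE cryptography and redundant residue number system error correction are omitted.

```python
import json

# Hypothetical access policy with partially hidden attributes: visible
# attribute values are kept, hidden attributes keep only their names.
policy = {
    "visible": {"role": "doctor"},
    "hidden": ["department", "clearance"],
}

# Serialization: flatten the policy to a byte stream before encryption.
stream = json.dumps(policy, sort_keys=True).encode("utf-8")

# Deserialization: recover the original policy from the byte stream.
recovered = json.loads(stream.decode("utf-8"))
assert recovered == policy
```

The round trip is lossless, which is the property the paper relies on when it serializes the constraint policy before encryption and recovers it afterwards.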

DOI: 10.1108/IJPCC-06-2022-0223 | Published: 2022-10-05 | © 2022 Emerald Publishing Limited
Embedding and Siamese deep neural network-based malware detection in Internet of Things
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-06-2022-0236/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
T. Sree Lakshmi, M. Govindarajan, Asadi Srinivasulu
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

A proper understanding of malware characteristics is necessary to protect the massive data generated by advances in the Internet of Things (IoT), big data and the cloud. Because of the encryption techniques used by attackers, network security experts struggle to develop efficient malware detection techniques. Though a few machine learning-based techniques are used by researchers for malware detection, large amounts of data must be processed and detection accuracy needs to be improved for efficient malware detection. Deep learning-based methods have gained significant momentum in recent years for the accurate detection of malware. The purpose of this paper is to create an efficient malware detection system for the IoT using Siamese deep neural networks.

In this work, a novel Siamese deep neural network system with an embedding vector is proposed. Siamese systems have generated significant interest because of their capacity to learn a significant portion of the input. The proposed method is efficient for malware detection in the IoT because it learns from a few records to improve its forecasts. The goal is to determine the evolution of malware similarity in emerging technology domains.

The cloud platform is used to perform experiments on the Malimg data set. ResNet50 was pretrained as a component of the subsystem that established embedding. Each system reviews a set of input documents to determine whether they belong to the same family. The results of the experiments show that the proposed method outperforms existing techniques in terms of accuracy and efficiency.

The proposed work generates an embedding for each input. Each system examined a collection of data files to determine whether they belonged to the same family. Cosine proximity is also used to estimate vector similarity in a high-dimensional space.
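The cosine proximity step can be sketched as follows. This is a minimal illustration of the similarity measure itself, not the paper's ResNet50-based embedding pipeline; the three-dimensional example vectors are invented stand-ins for the real high-dimensional embeddings.

```python
import math

def cosine_similarity(u, v):
    """Cosine proximity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Embeddings of samples from the same malware family should score near 1.0;
# unrelated samples score near 0 or below.
same_family = cosine_similarity([0.9, 0.1, 0.4], [0.85, 0.15, 0.35])
different = cosine_similarity([0.9, 0.1, 0.4], [-0.2, 0.8, 0.1])
```

A Siamese network would threshold this score (or a distance derived from it) to decide whether two files belong to the same family.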

DOI: 10.1108/IJPCC-06-2022-0236 | Published: 2022-11-07 | © 2022 Emerald Publishing Limited
ElGamal algorithm with hyperchaotic sequence to enhance security of cloud data
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-06-2022-0240/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Aruna Kumari Koppaka, Vadlamani Naga Lakshmi
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

In the cloud computing environment, preserving privacy and securing cloud data are crucial and demanding tasks. In both the commercial and academic worlds, the privacy of important and sensitive data needs to be safeguarded from unauthorized users to improve its security. Therefore, several key generation, encryption and decryption algorithms have been developed for data privacy preservation in the cloud environment. Still, outsourced data suffers from problems such as weak data security, time consumption and increased computational complexity. The purpose of this research study is to develop an effective cryptosystem algorithm to secure outsourced data with minimum computational complexity.

A new cryptosystem algorithm is proposed in this paper to address the above-mentioned concerns. The introduced cryptosystem algorithm combines the ElGamal algorithm with a hyperchaotic sequence, which effectively encrypts the outsourced data and diminishes the computational complexity of the system.

In the results section, the performance of the proposed improved ElGamal cryptosystem (IEC) algorithm is validated using metrics such as encryption time, execution time, decryption time and key generation comparison time. The IEC algorithm reduced encryption and decryption time by approximately 0.08–1.786 ms compared to the existing secure data deletion and verification model.

The IEC algorithm significantly enhances data security in cloud environments by increasing the strength of the key pairs. In this manuscript, the conventional ElGamal algorithm is integrated with pseudorandom sequences for pseudorandom key generation, improving the security of outsourced cloud data.
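The underlying ElGamal scheme can be sketched as textbook modular-arithmetic code. This is an illustration of plain ElGamal only, not the paper's IEC variant: the hyperchaotic pseudorandom sequence is replaced here by Python's `random` module as a stand-in, and the small 32-bit prime is for readability (real deployments use primes of 2048 bits or more).

```python
import random

p = 4294967291          # the prime 2**32 - 5; illustrative size only
g = 2                   # public base

def keygen():
    """Return (private key x, public key y = g^x mod p)."""
    x = random.randrange(2, p - 1)
    return x, pow(g, x, p)

def encrypt(y, m):
    """Encrypt message m < p under public key y with a fresh ephemeral k."""
    k = random.randrange(2, p - 1)
    return pow(g, k, p), (m * pow(y, k, p)) % p

def decrypt(x, c1, c2):
    """Recover m: divide c2 by the shared secret c1^x using Fermat's inverse."""
    s = pow(c1, x, p)
    return (c2 * pow(s, p - 2, p)) % p

x, y = keygen()
c1, c2 = encrypt(y, 1234)
assert decrypt(x, c1, c2) == 1234
```

The paper's contribution sits in how `x` and `k` are generated: drawing them from a hyperchaotic sequence rather than a conventional generator is what is claimed to strengthen the key pairs.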

DOI: 10.1108/IJPCC-06-2022-0240 | Published: 2022-10-13 | © 2022 Emerald Publishing Limited
Improving GPU performance in multimedia applications through FPGA based adaptive DMA controller
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-06-2022-0241/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Santosh Kumar B., Krishna Kumar E.
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

Deep learning techniques are unavoidable in a variety of domains such as health care, computer vision and cyber-security. These algorithms demand high data transfers but encounter bottlenecks in achieving high-speed, low-latency synchronization when implemented on real hardware architectures. Though the direct memory access controller (DMAC) has attracted considerable research attention for achieving bulk data transfers, existing direct memory access (DMA) systems continue to face the challenge of achieving high-speed communication. The purpose of this study is to develop an adaptively configured DMA architecture for bulk data transfer with high throughput and low computation delay.

The proposed methodology consists of a heterogeneous computing system integrating specialized hardware and software. For the hardware, the authors propose a field-programmable gate array (FPGA)-based DMAC, which transfers the data to the graphics processing unit (GPU) using PCI Express. The workload characterization technique is designed in Python and is implementable on the Advanced RISC Machine (ARM) Cortex architecture with a suitable communication interface. This module offloads the input data streams to the FPGA and initiates the FPGA to control the flow of data to the GPU for efficient processing.

This paper presents an evaluation of a configurable workload-based DMA controller that collects data from input devices and concurrently applies it to the GPU architecture, bypassing extraneous hardware and software copies and bottlenecks via PCI Express. It also investigates the use of adaptive DMA memory buffer allocation and workload characterization techniques. The proposed DMA architecture is compared with existing DMA architectures, and the proposed DMAC outperforms traditional DMA by achieving 96% throughput and 50% lower synchronization latency.

The proposed gated recurrent unit achieved 95.6% accuracy in characterizing workloads as heavy, medium or normal. The proposed model outperformed the other algorithms, demonstrating its strength in workload characterization.
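The three-way workload labelling can be sketched with a minimal stand-in. This is not the paper's gated recurrent unit model; it is a plain threshold rule over a single transfer-size feature, and the byte thresholds below are invented purely for illustration of the heavy/medium/normal split.

```python
def characterize(transfer_bytes):
    """Toy workload labelling; thresholds are hypothetical, not the paper's."""
    if transfer_bytes >= 64 * 1024 * 1024:   # >= 64 MiB: heavy workload
        return "heavy"
    if transfer_bytes >= 4 * 1024 * 1024:    # >= 4 MiB: medium workload
        return "medium"
    return "normal"                          # anything smaller: normal

labels = [characterize(n) for n in (1_000, 8 * 1024 * 1024, 128 * 1024 * 1024)]
```

In the proposed architecture, labels like these would steer the adaptive DMA buffer allocation so that heavy streams receive larger buffers.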

DOI: 10.1108/IJPCC-06-2022-0241 | Published: 2022-10-17 | © 2022 Emerald Publishing Limited
A secure IoT and edge computing based EV selection model in V2G systems using ant colony optimization algorithm
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-06-2022-0245/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Gopinath Anjinappa, Divakar Bangalore Prabhakar
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

Fluctuations in power requirements produce wide variations in voltage regulation and frequency. These fluctuations are caused by substantial changes in energy dissipation. Operational efficiency is reduced when the power grid is supplied by electric vehicles (EVs) acting as power resources. The model showed active load matching for regulating power, and harmonic motion occurred in the energy. The main purpose of the proposed research is to manage the energy sources for stabilization, which increases reliability and improves power efficiency. This paper also aims to elaborate the security and privacy challenges present in the vehicle-to-grid (V2G) network and their impact on grid resilience.

A smart framework based on the Internet of Things and edge computing is proposed to perform effective V2G operation. An optimal charge-scheduling model is designed for each EV to maximize the number of users, and the best EV is selected using the proposed ant colony optimization (ACO). In the constructive phase of ACO, the ants in the colony generate feasible solutions. The constructive phase with local search yields an ACO algorithm that uses a heterogeneous colony of ants and effectively finds the best-known solutions to the problem.

The power usage of the existing in-circuit serial programming plug-in electric vehicles model ranged from 0.94 to 0.96 kWh over time, compared with 0.995 to 0.939 kWh for the proposed ACO. The results showed that energy-aware routing with ACO provided feasible routing solutions for the source node, sustaining the sensor network over its lifetime and providing security at authentication time.

The proposed energy-aware ACO routing protocol is analyzed and compared in terms of energy utilization across the sensor area network, using power resources effectively.
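The ACO constructive phase described above can be sketched minimally: each ant picks an EV with probability proportional to pheromone raised to alpha times a heuristic (inverse energy cost) raised to beta, then pheromone evaporates and is deposited on the chosen EV. The EV names, kWh costs and parameters below are invented for illustration; the paper's actual cost model and local search step are omitted.

```python
import random

energy_cost = {"EV1": 0.95, "EV2": 0.80, "EV3": 0.99}   # hypothetical kWh costs
pheromone = {ev: 1.0 for ev in energy_cost}              # uniform initial trails

def choose_ev(alpha=1.0, beta=2.0):
    """Roulette-wheel selection: P(ev) ~ pheromone^alpha * (1/cost)^beta."""
    weights = {ev: (pheromone[ev] ** alpha) * ((1.0 / c) ** beta)
               for ev, c in energy_cost.items()}
    r = random.uniform(0, sum(weights.values()))
    for ev, w in weights.items():
        r -= w
        if r <= 0:
            return ev
    return ev  # numerical fallback: last candidate

def update_pheromone(chosen, rho=0.1):
    """Evaporate all trails, then deposit on the chosen EV (more for low cost)."""
    for ev in pheromone:
        pheromone[ev] *= (1 - rho)
    pheromone[chosen] += 1.0 / energy_cost[chosen]

# Positive feedback: low-cost EVs tend to accumulate pheromone over iterations.
for _ in range(200):
    update_pheromone(choose_ev())
best = max(pheromone, key=pheromone.get)
```

Because the selection is stochastic, individual runs differ, but the reinforcement loop biases the colony toward the lowest-cost EV.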

DOI: 10.1108/IJPCC-06-2022-0245 | Published: 2022-09-21 | © 2022 Emerald Publishing Limited
Classification of disordered patient’s voice by using pervasive computational algorithms
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-07-2021-0158/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Anil Kumar Maddali, Habibulla Khan
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

Currently, the design and technological features of voices, along with their analysis in various applications, are being simulated to meet the requirement of communicating at a greater distance or more discreetly. The purpose of this study is to explore how voices and their analyses are used in the modern literature to generate a variety of solutions, of which only a few successful models exist.

The mel-frequency cepstral coefficient (MFCC), average magnitude difference function, cepstrum analysis and other voice characteristics are effectively modeled and implemented using mathematical modeling with variable weight parameters for each algorithm, which can be used with or without noise. The design characteristics and their weights are improved with different supervised algorithms that regulate the design model simulation.

Different data models have been influenced by the parametric range and solution analysis in different parameter spaces, such as the frequency or time domain, with features computed without noise, with noise and after noise reduction. The frequency response of the current design can be analyzed through windowing techniques.

A new model and its implementation scenario with pervasive computational algorithms (PCA), such as the hybrid PCA with AdaBoost (HPCA), PCA with bag of features and improved PCA with bag of features, relate different features such as MFCC, power spectrum, pitch and windowing techniques, and are calculated using the HPCA. The features are accumulated in matrix formulations that govern the design feature comparison and its feature classification for improved performance parameters, as mentioned in the results.
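Of the voice features listed above, the average magnitude difference function (AMDF) is the simplest to sketch: for each lag, it averages the absolute difference between the signal and its lagged copy, and the lag with the deepest valley estimates the pitch period. The synthetic 160 Hz tone below is an invented test input, not data from the study.

```python
import math

def amdf(signal, max_lag):
    """Average magnitude difference function over lags 1..max_lag."""
    out = []
    for lag in range(1, max_lag + 1):
        n = len(signal) - lag
        out.append(sum(abs(signal[i] - signal[i + lag]) for i in range(n)) / n)
    return out

# A pure tone at 160 Hz sampled at 8 kHz has a period of 50 samples,
# so the AMDF dips to (near) zero at lag 50.
fs, f0 = 8000, 160
x = [math.sin(2 * math.pi * f0 * i / fs) for i in range(800)]
d = amdf(x, 80)
pitch_period = d.index(min(d)) + 1   # lag with the deepest valley
```

Real disordered-voice signals are noisy and aperiodic, which is why the study combines AMDF with MFCC, cepstrum analysis and weighted supervised models rather than relying on any single feature.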

DOI: 10.1108/IJPCC-07-2021-0158 | Published: 2022-01-25 | © 2022 Emerald Publishing Limited
A novel approach for detection and classification of re-entrant crack using modified CNNetwork
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-08-2021-0200/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Shadrack Fred Mahenge, Ala Alsanabani
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print, pp.-

Cracks in buildings are common and are usually fixed after human inspection at visible range, but some cracks exist in places too distant for the human eye and can only be captured with a camera. Large cracks are readily visible, but smaller cracks caused by flaws in wall construction require authentic information and confirmation so they can be dealt with successfully, as cracks in a wall can result in structural collapse.

In the modern era, digital image processing has gained importance in every domain of engineering. Hence, this research study deals with the wall cracks found during the building inspection process, using a unique U-net architecture in combination with a convolutional neural network method.

For modeling the proposed system, an image database from the Mendeley portal is used for the analysis. The experimental analysis showed that the proposed system was able to detect wall cracks, report crack-free flat surfaces as having no cracks found and successfully handle the two phases of operation, namely, classification and segmentation, with the deep learning technique. In contrast to other conventional methodologies, the proposed methodology produces excellent performance results.

The originality of the paper lies in locating the cracked portions of walls using a deep learning architecture.

A novel approach for detection and classification of re-entrant crack using modified CNNetwork. Shadrack Fred Mahenge, Ala Alsanabani. International Journal of Pervasive Computing and Communications, ahead-of-print. DOI: 10.1108/IJPCC-08-2021-0200. Published 2021-12-21. © 2021 Emerald Publishing Limited. https://www.emerald.com/insight/content/doi/10.1108/IJPCC-08-2021-0200/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Weighted ensemble classifier for malicious link detection using natural language processing
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-09-2022-0312/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Saleem Raja A., Sundaravadivazhagan Balasubaramanian, Pradeepa Ganesan, Justin Rajasekaran, Karthikeyan R.
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print

The internet has completely merged into contemporary life, and people rely on internet services for everyday activities. Consequently, an abundance of information about people and organizations is available online, which encourages the proliferation of cybercrimes. Cybercriminals often use malicious links for large-scale cyberattacks, which are disseminated via email, SMS and social media. Recognizing malicious links online can be exceedingly challenging. The purpose of this paper is to present a strong security system that can detect malicious links in cyberspace using natural language processing techniques.

Researchers have recommended a variety of approaches, including blacklisting and rule-based machine/deep learning, for automatically recognizing malicious links. However, these approaches generally necessitate generating a set of features to generalize the detection process. Most of the features are generated by processing URLs and the content of the web page, along with some external features such as the ranking of the web page and domain name system information. This process of feature extraction and selection typically takes considerable time and demands a high level of domain expertise. Sometimes the generated features may not leverage the full potential of the data set. In addition, the majority of currently deployed systems use a single classifier for the classification of malicious links, yet prediction accuracy may vary widely depending on the data set and the classifier used.

To address the issue of generating feature sets, the proposed method uses natural language processing techniques (term frequency and inverse document frequency) to vectorize URLs. To build a robust system for the classification of malicious links, the proposed system implements a weighted soft-voting classifier, an ensemble classifier that combines the predictions of base classifiers. Each base classifier's predictive skill serves as the basis for the weight assigned to it.
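As a rough illustration of these two ideas, the sketch below builds TF-IDF vectors over character n-grams of URLs and combines hypothetical base-classifier probabilities by weighted soft voting; the URLs, probability values and weights are made up for the example and are not from the paper:

```python
import math
from collections import Counter

def char_ngrams(url, n=3):
    """Character n-grams let TF-IDF work directly on raw URL strings."""
    return [url[i:i + n] for i in range(len(url) - n + 1)]

def tfidf_vectors(urls, n=3):
    """TF-IDF over character n-grams, so no hand-crafted features are needed."""
    docs = [Counter(char_ngrams(u, n)) for u in urls]
    vocab = sorted({g for d in docs for g in d})
    N = len(docs)
    df = {g: sum(1 for d in docs if g in d) for g in vocab}
    idf = {g: math.log(N / df[g]) + 1.0 for g in vocab}  # smoothed IDF
    vecs = []
    for d in docs:
        total = sum(d.values())
        vecs.append([d[g] / total * idf[g] if g in d else 0.0 for g in vocab])
    return vecs, vocab

def weighted_soft_vote(probas, weights):
    """Weighted average of base-classifier class probabilities; argmax wins."""
    wsum = sum(weights)
    n_classes = len(probas[0])
    avg = [sum(w * p[c] for w, p in zip(weights, probas)) / wsum
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

urls = ["http://example.com/login", "http://paypa1-secure.ru/update"]
vecs, vocab = tfidf_vectors(urls)

# Three hypothetical base classifiers vote on [benign, malicious];
# the most skilled classifier carries the largest weight (0.5).
probas = [[0.6, 0.4], [0.55, 0.45], [0.2, 0.8]]
weights = [0.25, 0.25, 0.5]
print(weighted_soft_vote(probas, weights))  # 1 -> malicious
```

Note how the heavier weight lets the stronger classifier overturn the two weaker votes, which is the point of weighting by skill rather than averaging uniformly.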

The proposed method performs best when optimal weights are assigned. Its performance was assessed using two different data sets (D1 and D2) and compared against base machine learning classifiers and previous research results. The resulting accuracy shows that the proposed method is superior to existing methods, offering 91.4% and 98.8% accuracy for data sets D1 and D2, respectively.

Weighted ensemble classifier for malicious link detection using natural language processing. Saleem Raja A., Sundaravadivazhagan Balasubaramanian, Pradeepa Ganesan, Justin Rajasekaran, Karthikeyan R. International Journal of Pervasive Computing and Communications, ahead-of-print. DOI: 10.1108/IJPCC-09-2022-0312. Published 2023-01-03. © 2022 Emerald Publishing Limited. https://www.emerald.com/insight/content/doi/10.1108/IJPCC-09-2022-0312/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Detection IoT attacks using Lasso regression algorithm with ensemble classifier
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-09-2022-0316/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
K.V. Sheelavathy, V. Udaya Rani
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print

The Internet of Things (IoT) is a network that connects various physical objects such as smart machines and smart home appliances. Each physical object is allocated a unique internet address (an Internet Protocol address), which is used to broadcast data to external objects over the internet. The sudden increase in the number of attacks generated by intruders causes security problems for IoT devices during communication. The main purpose of this paper is to develop an effective attack detection method to enhance robustness against attackers in IoT.

In this research, the lasso regression algorithm is proposed along with an ensemble classifier for identifying IoT attacks. The lasso algorithm performs feature selection, producing sparse models with fewer parameters, and this type of regression is well suited when model selection requires eliminating parameters. Lasso regression obtains a subset of predictors that lowers the prediction error with respect to the quantitative response variable: the constraint it imposes on the model parameters causes the coefficients of some variables to shrink exactly to zero. The selected features are then classified by an ensemble classifier, which is important because the dataset contains both linear and nonlinear data types, and the combined models handle both.
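A minimal sketch of lasso feature selection via cyclic coordinate descent shows how the soft-thresholding step shrinks an uninformative coefficient exactly to zero; the toy data and regularization strength below are illustrative, not the paper's setup:

```python
def soft_threshold(rho, lam):
    """Lasso soft-thresholding: shrinks small coefficients exactly to zero."""
    if rho < -lam:
        return rho + lam
    if rho > lam:
        return rho - lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, n_iters=200):
    """Minimise 0.5*||y - Xb||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iters):
        for j in range(p):
            # residual excluding feature j's current contribution
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Toy traffic features: column 0 drives y, column 1 is low-level noise.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, -0.1]]
y = [2.0, 4.0, 6.0, 8.0]
beta = lasso_coordinate_descent(X, y, lam=0.5)
selected = [j for j, b in enumerate(beta) if abs(b) > 1e-6]
print(selected)  # [0] -> the noise feature is eliminated
```

The surviving feature indices would then be passed to the ensemble classifier stage; any off-the-shelf lasso (e.g. a library implementation) could replace this hand-rolled solver.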

The lasso regression with ensemble classifier-based attack classification, covering distributed denial-of-service and Mirai botnet attacks, achieved an improved accuracy of 99.981% compared with conventional deep neural network (DNN) methods.

Here, an efficient lasso regression algorithm is developed to extract features for network anomaly detection using an ensemble classifier.

Detection IoT attacks using Lasso regression algorithm with ensemble classifier. K.V. Sheelavathy, V. Udaya Rani. International Journal of Pervasive Computing and Communications, ahead-of-print. DOI: 10.1108/IJPCC-09-2022-0316. Published 2022-12-29. © 2022 Emerald Publishing Limited. https://www.emerald.com/insight/content/doi/10.1108/IJPCC-09-2022-0316/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
IoT-based multimodal liveness detection using the fusion of ECG and fingerprint
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-10-2021-0248/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Anil Kumar Gona, Subramoniam M.
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print

Biometric scans using fingerprints are widely used for security purposes. However, fingerprint scans are not very reliable for authentication, because they can be faked by obtaining a sample of the person's fingerprint. A few spoof detection techniques are available to reduce the incidence of spoofing of the biometric system. Among them, the most commonly used is the binary classification technique, which detects real or fake fingerprints based on the fingerprint samples provided during training. However, this technique fails when presented with samples produced by spoofing techniques different from those covered in the training samples. This paper aims to improve liveness detection accuracy by fusing electrocardiogram (ECG) and fingerprint data.

In this paper, to avoid this limitation, an efficient liveness detection algorithm is developed using the fusion of ECG signals captured from the fingertips and fingerprint data in an Internet of Things (IoT) environment. The ECG signal ensures that real fingerprint samples are distinguished from fake ones.

Single-modality fingerprint methods have some disadvantages, such as noisy data and sensitivity to fingerprint position. To overcome this, ECG and fingerprint data are fused so that the combined data improve the detection accuracy.
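The fusion idea can be illustrated with a toy score-level scheme: the ECG score acts as a liveness gate, and the two modality scores are then combined with fixed weights. The weights, thresholds and function names here are arbitrary placeholders for the sketch, not the paper's values:

```python
def fuse_scores(fp_score, ecg_score, w_fp=0.6, w_ecg=0.4):
    """Weighted score-level fusion of the two modality match scores (0..1)."""
    return w_fp * fp_score + w_ecg * ecg_score

def accept(fp_score, ecg_score, liveness_threshold=0.3, match_threshold=0.7):
    """An ECG score below the liveness threshold suggests a spoofed fingertip."""
    if ecg_score < liveness_threshold:
        return False  # no live ECG signal -> reject as a spoof
    return fuse_scores(fp_score, ecg_score) >= match_threshold

# A high-quality fake fingerprint (fp=0.95) with no live ECG is rejected,
# while a genuine finger with a moderate print still passes.
print(accept(0.95, 0.05))  # False
print(accept(0.80, 0.75))  # True
```

This captures why fusion helps: a spoofed print cannot pass without a plausible ECG, and a noisy print can still pass when the ECG is strong.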

System security is improved in this approach, and the fingerprint recognition rate is also improved. An IoT-based approach is used in this work to reduce the computational burden on data processing systems.

IoT-based multimodal liveness detection using the fusion of ECG and fingerprint. Anil Kumar Gona, Subramoniam M. International Journal of Pervasive Computing and Communications, ahead-of-print. DOI: 10.1108/IJPCC-10-2021-0248. Published 2022-08-16. © 2020 Emerald Publishing Limited. https://www.emerald.com/insight/content/doi/10.1108/IJPCC-10-2021-0248/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
AI-federated novel delay-aware link-scheduling for Industry 4.0 applications in IoT networks
https://www.emerald.com/insight/content/doi/10.1108/IJPCC-12-2021-0297/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest
Suvarna Abhijit Patil, Prasad Kishor Gokhale
International Journal of Pervasive Computing and Communications, Vol. ahead-of-print, No. ahead-of-print

With the advent of AI-federated technologies, it is feasible to perform complex tasks in an industrial Internet of Things (IIoT) environment by enhancing network throughput and reducing the latency of transmitted data. Communications in IIoT and Industry 4.0 require the handshaking of multiple technologies to support heterogeneous networks and diverse protocols. IIoT applications may gather and analyse sensor data, allowing operators to monitor and manage production systems, resulting in considerable performance gains in automated processes. All IIoT applications generate vast data sets with diverse characteristics. Obtaining optimum throughput in an IIoT environment requires efficient processing of IIoT applications over communication channels. Because computing resources in the IIoT are limited, equitable resource allocation with the least amount of delay is what IIoT applications need. Although some existing scheduling strategies address delay concerns, faster data transmission and optimal throughput should also be addressed alongside transmission delay. Hence, this study focuses on a fair mechanism that handles throughput, transmission delay and faster data transmission. The proposed work provides a link-scheduling algorithm, termed delay-aware resource allocation, that allocates computing resources to computation-sensitive tasks by reducing overall latency and increasing the overall throughput of the network. First, a multi-hop delay model is developed with multistep delay prediction using an AI-federated long short-term memory (LSTM) neural network, which serves as a foundation for the design. Then, a link-scheduling algorithm is designed for efficient data routing. The extensive experimental results reveal that the average end-to-end delay, considering processing, propagation, queueing and transmission delays, is minimized with the proposed strategy.
Experiments show that advances in machine learning have led to a smart, collaborative link-scheduling algorithm for fairness-driven resource allocation with minimal delay and optimal throughput. The prediction performance of the AI-federated LSTM is compared with existing approaches and outperforms other techniques, achieving 98.2% accuracy.

With the increase in IoT devices, the demand for IoT gateways has grown, which increases the cost of network infrastructure. As a result, the proposed system uses low-cost intermediate gateways. Each gateway may use a different communication technology for data transmission within an IoT network, so gateways are heterogeneous, with hardware support limited to the technologies associated with the wireless sensor networks. Data communication fairness at each gateway is achieved in an IoT network by considering dynamic IoT traffic and link-scheduling problems to achieve effective resource allocation. A two-phased solution is provided to solve these problems for improved data communication in heterogeneous networks while achieving fairness. In the first phase, dynamic traffic is predicted using the LSTM network model. In the second phase, efficient per-technology link selection and link scheduling are achieved based on predicted load, the distance between gateways, link capacity and the time required by the supported technologies, such as Bluetooth, Wi-Fi and Zigbee. This enhances data transmission fairness for all gateways, resulting in more data transmission and maximum throughput.
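The second-phase selection rule can be sketched as a minimal greedy chooser that, for each gateway's predicted load, picks the in-range technology minimising an estimated transfer time. The data rates, ranges and gateway figures below are illustrative placeholders, not values from the paper:

```python
# Rough per-technology (data rate Mbit/s, max range m) figures - illustrative.
TECHS = {
    "bluetooth": (2.0, 10),
    "zigbee": (0.25, 30),
    "wifi": (54.0, 20),
}

def select_link(predicted_load_mbit, distance_m):
    """Return the in-range technology with the lowest naive transfer time."""
    best, best_time = None, float("inf")
    for tech, (rate, rng) in TECHS.items():
        if distance_m > rng:
            continue  # link not feasible at this gateway distance
        t = predicted_load_mbit / rate  # naive transfer-time estimate
        if t < best_time:
            best, best_time = tech, t
    return best

# Per-gateway predicted loads (e.g. from an LSTM forecaster) and distances.
gateways = [("gw1", 10.0, 5), ("gw2", 1.0, 25), ("gw3", 40.0, 15)]
schedule = {g: select_link(load, d) for g, load, d in gateways}
print(schedule)  # {'gw1': 'wifi', 'gw2': 'zigbee', 'gw3': 'wifi'}
```

In the full algorithm the predicted load would come from the LSTM traffic forecast and the time estimate would also account for queueing and link capacity, but the greedy per-gateway choice is the same shape.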

Simulation demonstrates that the proposed approach achieves maximum network throughput with less packet delay. It also shows that AI- and IoT-federated devices can communicate seamlessly over IoT networks in Industry 4.0.

The concept is part of the original research work and can be adopted by Industry 4.0 for easy and seamless connectivity of AI- and IoT-federated devices.

AI-federated novel delay-aware link-scheduling for Industry 4.0 applications in IoT networks. Suvarna Abhijit Patil, Prasad Kishor Gokhale. International Journal of Pervasive Computing and Communications, ahead-of-print. DOI: 10.1108/IJPCC-12-2021-0297. Published 2022-06-22. © 2022 Emerald Publishing Limited. https://www.emerald.com/insight/content/doi/10.1108/IJPCC-12-2021-0297/full/html?utm_source=rss&utm_medium=feed&utm_campaign=rss_journalLatest