Search results

1 – 10 of 13
Article
Publication date: 12 January 2024

Priya Mishra and Aleena Swetapadma

Sleep arousal detection is an important factor in monitoring sleep disorders.

Abstract

Purpose

Sleep arousal detection is an important factor in monitoring sleep disorders.

Design/methodology/approach

Thus, a unique n-layer one-dimensional (1D) convolutional neural network (CNN)-based U-Net model for automatic sleep arousal identification has been proposed.
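
As an editorial illustration only (not the article's code), the following is a minimal Keras sketch of a 1D U-Net segmenter of the kind described; the eight input channels, the 4,096-sample window and the per-sample arousal output are assumptions.

    # Illustrative 1D U-Net sketch (assumed shapes, not the authors' model).
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def conv_block(x, filters):
        x = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
        return layers.Conv1D(filters, 3, padding="same", activation="relu")(x)

    def unet_1d(window_len=4096, n_channels=8):
        inputs = layers.Input(shape=(window_len, n_channels))
        c1 = conv_block(inputs, 16)              # encoder level 1
        p1 = layers.MaxPooling1D(2)(c1)
        c2 = conv_block(p1, 32)                  # encoder level 2
        p2 = layers.MaxPooling1D(2)(c2)
        b = conv_block(p2, 64)                   # bottleneck
        u2 = layers.UpSampling1D(2)(b)           # decoder with skip connections
        c3 = conv_block(layers.concatenate([u2, c2]), 32)
        u1 = layers.UpSampling1D(2)(c3)
        c4 = conv_block(layers.concatenate([u1, c1]), 16)
        out = layers.Conv1D(1, 1, activation="sigmoid")(c4)  # arousal probability per sample
        return Model(inputs, out)

    model = unet_1d()
    model.compile(optimizer="adam", loss="binary_crossentropy")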

Findings

The proposed method has achieved an area under the precision-recall curve (AUPRC) of 0.498 and an area under the receiver operating characteristic curve (AUROC) of 0.946.
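
For reference, these two scores correspond to standard scikit-learn metrics; a toy computation (example labels and probabilities, not the study's data) looks like this:

    # Illustrative metric computation with made-up values.
    from sklearn.metrics import average_precision_score, roc_auc_score

    y_true = [0, 0, 1, 1, 0, 1]                 # example ground-truth arousal labels
    y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]   # example model probabilities
    auprc = average_precision_score(y_true, y_score)  # area under the precision-recall curve
    auroc = roc_auc_score(y_true, y_score)            # area under the ROC curve
    print(f"AUPRC={auprc:.3f}, AUROC={auroc:.3f}")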

Originality/value

No other researchers have suggested U-Net-based detection of sleep arousal.

Research limitations/implications

From the experimental results, it has been found that the U-Net model achieves better accuracy than state-of-the-art methods.

Practical implications

Sleep arousal detection is an important factor in monitoring sleep disorders. The objective of the work is to detect sleep arousal using different physiological channels of the human body.

Social implications

It will help in improving mental health by monitoring a person's sleep.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 22 June 2022

Suvarna Abhijit Patil and Prasad Kishor Gokhale

With the advent of AI-federated technologies, it is feasible to perform complex tasks in an industrial Internet of Things (IIoT) environment by enhancing the throughput of the network…

Abstract

Purpose

With the advent of AI-federated technologies, it is feasible to perform complex tasks in an industrial Internet of Things (IIoT) environment by enhancing the throughput of the network and reducing the latency of transmitted data. Communication in IIoT and Industry 4.0 requires the handshaking of multiple technologies to support heterogeneous networks and diverse protocols. IIoT applications may gather and analyse sensor data, allowing operators to monitor and manage production systems, resulting in considerable performance gains in automated processes. All IIoT applications generate vast sets of data with diverse characteristics. Obtaining optimum throughput in an IIoT environment requires efficient processing of IIoT applications over communication channels. Because computing resources in the IIoT are limited, equitable resource allocation with the least amount of delay is a key requirement of IIoT applications. Although some existing scheduling strategies address delay concerns, faster data transmission and optimal throughput should also be addressed along with the handling of transmission delay. Hence, this study focuses on a fair mechanism to handle throughput, transmission delay and faster transmission of data. The proposed work provides a link-scheduling algorithm, termed delay-aware resource allocation, that allocates computing resources to computation-sensitive tasks while reducing overall latency and increasing the overall throughput of the network. First, a multi-hop delay model is developed with multistep delay prediction using an AI-federated long short-term memory (LSTM) neural network, which serves as a foundation for the subsequent design. Then, a link-scheduling algorithm is designed to route data efficiently. Extensive experimental results reveal that the average end-to-end delay, considering processing, propagation, queueing and transmission delays, is minimized with the proposed strategy. Experiments show that advances in machine learning have led to a smart, collaborative link-scheduling algorithm for fairness-driven resource allocation with minimal delay and optimal throughput. The prediction performance of the AI-federated LSTM is compared with existing approaches, and it outperforms the other techniques, achieving 98.2% accuracy.
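
Purely to illustrate the multistep delay-prediction step (not the authors' implementation), a minimal Keras LSTM can map a window of past delay measurements to a short forecast horizon; the window and horizon sizes below are assumptions and the data are synthetic.

    # Illustrative multistep delay forecaster (assumed sizes, synthetic data).
    import numpy as np
    from tensorflow.keras import layers, Sequential

    LOOKBACK, HORIZON = 20, 4                    # assumed history window and forecast horizon

    model = Sequential([
        layers.Input(shape=(LOOKBACK, 1)),
        layers.LSTM(64),                          # summarize the recent delay history
        layers.Dense(HORIZON)                     # predict the next HORIZON delay values
    ])
    model.compile(optimizer="adam", loss="mse")

    # Toy training data: sliding windows over a synthetic delay series.
    series = np.random.rand(1000).astype("float32")
    n = len(series) - LOOKBACK - HORIZON
    X = np.stack([series[i:i + LOOKBACK] for i in range(n)])[..., None]
    y = np.stack([series[i + LOOKBACK:i + LOOKBACK + HORIZON] for i in range(n)])
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)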

Design/methodology/approach

With the increase in IoT devices, the demand for IoT gateways has grown, which raises the cost of network infrastructure. As a result, the proposed system uses low-cost intermediate gateways. Each gateway may use a different communication technology for data transmission within an IoT network. Gateways are therefore heterogeneous, with hardware support limited to the technologies associated with their wireless sensor networks. Data communication fairness at each gateway is achieved by considering dynamic IoT traffic and link-scheduling problems to attain effective resource allocation in the IoT network. A two-phased solution is provided to solve these problems for improved data communication in heterogeneous networks while achieving fairness. In the first phase, dynamic traffic is predicted using an LSTM network model. In the second phase, efficient link selection per technology and link scheduling are achieved based on the predicted load, the distance between gateways, link capacity and the time required by the supported technologies, such as Bluetooth, Wi-Fi and Zigbee. This enhances data-transmission fairness for all gateways, resulting in more data transmission and maximum throughput. Simulation demonstrates that the proposed approach achieves maximum network throughput with lower packet delay.

Findings

Simulation results demonstrate that the proposed approach outperforms existing methods, achieving maximum network throughput with lower packet delay. The results also show that AI- and IoT-federated devices can communicate seamlessly over IoT networks in Industry 4.0.

Originality/value

The concept is part of the original research work and can be adopted in Industry 4.0 for easy and seamless connectivity of AI- and IoT-federated devices.

Details

International Journal of Pervasive Computing and Communications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 28 February 2023

Annie Singla and Rajat Agrawal

This paper aims to propose DisDSS: a Web-based smart disaster management (DM) system for decision-making that will assist disaster professionals in determining the nature of…

Abstract

Purpose

This paper aims to propose DisDSS: a Web-based smart disaster management (DM) system for decision-making that will assist disaster professionals in determining the nature of disaster-related social media (SM) messages. The research classifies the tweets into need-based, availability-based, situational-based, general and irrelevant categories and visualizes them on a web interface, location-wise.

Design/methodology/approach

A fusion-based deep learning (DL) model is introduced to objectively determine the nature of an SM message. The proposed model uses convolutional neural network and bidirectional long short-term memory (BiLSTM) network layers.
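
A minimal Keras sketch of such a CNN + BiLSTM classifier (illustrative only, not DisDSS itself) might look as follows; the vocabulary size, tweet length and embedding width are assumptions, and the five outputs are the categories listed above.

    # Illustrative CNN + BiLSTM fusion classifier (assumed sizes).
    from tensorflow.keras import layers, Sequential

    VOCAB, MAXLEN, N_CLASSES = 20000, 60, 5

    model = Sequential([
        layers.Input(shape=(MAXLEN,)),
        layers.Embedding(VOCAB, 128),
        layers.Conv1D(64, 5, activation="relu"),       # local n-gram features
        layers.MaxPooling1D(2),
        layers.Bidirectional(layers.LSTM(64)),         # long-range context in both directions
        layers.Dense(N_CLASSES, activation="softmax")  # need / availability / situational / general / irrelevant
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])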

Findings

The developed system achieves better accuracy, precision, recall, F-score, area under the receiver operating characteristic curve and area under the precision-recall curve than other state-of-the-art methods in the literature. The contribution of this paper is threefold. First, it presents a new COVID-19 dataset of SM messages labelled with the nature of each message. Second, it offers a fusion-based DL model to classify SM data. Third, it presents a Web-based interface to visualize the structured information.

Originality/value

The architecture of DisDSS is analyzed based on the practical case study, i.e. COVID-19. The proposed DL-based model is embedded into a Web-based interface for decision support. To the best of the authors’ knowledge, this is India’s first SM-based DM system.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

Article
Publication date: 25 April 2023

Nehal Elshaboury, Eslam Mohammed Abdelkader, Abobakr Al-Sakkaf and Ashutosh Bagchi

The energy efficiency of buildings has been emphasized along with the continual development in the building and construction sector that consumes a significant amount of energy…

Abstract

Purpose

The energy efficiency of buildings has been emphasized along with the continual development in the building and construction sector that consumes a significant amount of energy. To this end, the purpose of this research paper is to forecast energy consumption to improve energy resource planning and management.

Design/methodology/approach

This study proposes the application of a convolutional neural network (CNN) for estimating electricity consumption in the Grey Nuns building in Canada. The performance of the proposed model is compared against that of long short-term memory (LSTM) and multilayer perceptron (MLP) neural networks. The models are trained and tested using monthly electricity consumption records (i.e. from May 2009 to December 2021) available from Concordia's facility department. Statistical measures (e.g. coefficient of determination [R2], root mean squared error [RMSE], mean absolute error [MAE] and mean absolute percentage error [MAPE]) are used to evaluate the outcomes of the models.
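
To make the setup concrete (a sketch under assumed input shapes, not the study's exact configuration), a small Conv1D forecaster and the four statistical measures can be written with Keras and scikit-learn; the consumption values below are toy numbers.

    # Illustrative Conv1D forecaster plus R2/RMSE/MAE/MAPE on toy values.
    import numpy as np
    from tensorflow.keras import layers, Sequential
    from sklearn.metrics import (r2_score, mean_squared_error,
                                 mean_absolute_error, mean_absolute_percentage_error)

    LOOKBACK = 12                                 # assumed: one year of monthly readings as input
    model = Sequential([
        layers.Input(shape=(LOOKBACK, 1)),
        layers.Conv1D(32, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(1)                           # next-month consumption (kWh)
    ])
    model.compile(optimizer="adam", loss="mse")

    y_true = np.array([210000.0, 250000.0, 230000.0])   # toy consumption values (kWh)
    y_pred = np.array([205000.0, 242000.0, 236000.0])
    print("R2  =", r2_score(y_true, y_pred))
    print("RMSE=", np.sqrt(mean_squared_error(y_true, y_pred)))
    print("MAE =", mean_absolute_error(y_true, y_pred))
    print("MAPE=", mean_absolute_percentage_error(y_true, y_pred))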

Findings

The results reveal that the CNN model outperforms the other models for predictions 6 and 12 months ahead. It improves on the LSTM and MLP models in terms of R2, RMSE, MAE and MAPE by more than 4%, 6%, 42% and 46%, respectively. The proposed model therefore uses the available data to predict electricity consumption 6 and 12 months ahead. For June and December 2022, the overall electricity consumption is estimated to be 195,312 kWh and 254,737 kWh, respectively.

Originality/value

This study discusses the development of an effective time-series model that can forecast future electricity consumption in a Canadian heritage building. Deep learning techniques are being used for the first time to anticipate the electricity consumption of the Grey Nuns building in Canada. Additionally, it evaluates the effectiveness of deep learning and machine learning methods for predicting electricity consumption using established performance indicators. Recognizing electricity consumption in buildings is beneficial for utility providers, facility managers and end users by improving energy and environmental efficiency.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 29 December 2023

B. Vasavi, P. Dileep and Ulligaddala Srinivasarao

Aspect-based sentiment analysis (ASA) is a sentiment analysis task that requires predicting the sentiment polarity of each aspect in a given sentence. Many traditional techniques use…

Abstract

Purpose

Aspect-based sentiment analysis (ASA) is a sentiment analysis task that requires predicting the sentiment polarity of each aspect in a given sentence. Many traditional techniques use graph-based mechanisms, which reduce prediction accuracy and introduce large amounts of noise. A further problem with graph-based mechanisms is that the sentiment of some context words changes depending on the aspect, so these words cannot be interpreted in isolation. ASA is challenging because a given sentence can express complicated sentiments about multiple aspects.

Design/methodology/approach

This research proposes an optimized attention-based DL model known as the optimized aspect and self-attention aware long short-term memory for target-based semantic analysis (OAS-LSTM-TSA). The proposed model goes through three phases: preprocessing, aspect extraction and classification. Aspect extraction is done using a double-layered convolutional neural network (DL-CNN). The optimized aspect and self-attention embedded LSTM (OAS-LSTM) is used to classify aspect sentiment into three classes: positive, neutral and negative.
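
As a rough illustration of an attention-augmented LSTM sentiment head (not OAS-LSTM-TSA itself, and without the pelican-based optimization), the sketch below assumes tokenized sentences and the three polarity classes.

    # Illustrative LSTM with a simple attention pooling layer (assumed sizes).
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    VOCAB, MAXLEN = 20000, 80

    tokens = layers.Input(shape=(MAXLEN,))
    x = layers.Embedding(VOCAB, 128)(tokens)
    h = layers.LSTM(64, return_sequences=True)(x)        # per-token hidden states
    scores = layers.Dense(1)(h)                          # unnormalized attention scores
    weights = layers.Softmax(axis=1)(scores)             # attention over time steps
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([weights, h])
    polarity = layers.Dense(3, activation="softmax")(context)  # positive / neutral / negative
    model = Model(tokens, polarity)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")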

Findings

The optimized aspect and self-attention embedded LSTM (OAS-LSTM) model is used to detect and classify the sentiment polarity of each aspect. The results of the proposed method revealed that it achieves a high accuracy of 95.3 per cent for the restaurant dataset and 96.7 per cent for the laptop dataset.

Originality/value

The novelty of the research work lies in the addition of two effective attention layers to the network model and in reducing the loss function and enhancing accuracy using a recent, efficient optimization algorithm. The loss function in OAS-LSTM is minimized using the adaptive pelican optimization algorithm, thus increasing the accuracy rate. The performance of the proposed method is validated on four real-world datasets, Rest14, Lap14, Rest15 and Rest16, across various performance metrics.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 30 August 2023

Yi-Hung Liu, Sheng-Fong Chen and Dan-Wei (Marian) Wen

Online medical repositories provide a platform for users to share information and dynamically access abundant electronic health data. It is important to determine whether case…

Abstract

Purpose

Online medical repositories provide a platform for users to share information and dynamically access abundant electronic health data. It is important to determine whether case report information can assist the general public in appropriately managing their diseases. Therefore, this paper aims to introduce a novel deep learning-based method that allows non-professionals to make inquiries using ordinary vocabulary, retrieving the most relevant case reports for accurate and effective health information.

Design/methodology/approach

The dataset of case reports was collected from both the patient-generated research network and the digital medical journal repository. To enhance the accuracy of obtaining relevant case reports, the authors propose a retrieval approach that combines BERT and BiLSTM methods. The authors identified representative health-related case reports and analyzed the retrieval performance, as well as user judgments.
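
A minimal sketch of a BERT + BiLSTM encoder used for similarity-based ranking (illustrative only; the "bert-base-uncased" checkpoint, the mean pooling and the example texts are assumptions, not the authors' configuration):

    # Illustrative BERT + BiLSTM retrieval sketch (assumed checkpoint and pooling).
    import torch
    from torch import nn
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    bert = AutoModel.from_pretrained("bert-base-uncased")
    bilstm = nn.LSTM(input_size=768, hidden_size=256, bidirectional=True, batch_first=True)

    def encode(texts):
        """BERT token embeddings -> BiLSTM -> mean pooling -> fixed-size vectors."""
        inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            token_states = bert(**inputs).last_hidden_state   # (batch, seq, 768)
            lstm_out, _ = bilstm(token_states)                # (batch, seq, 512)
        return lstm_out.mean(dim=1)                           # (batch, 512)

    # Rank case reports by cosine similarity to a lay-vocabulary query (toy texts).
    query_vec = encode(["trouble sleeping and morning headaches"])
    report_vecs = encode(["Case report: obstructive sleep apnea", "Case report: chronic migraine"])
    print(nn.functional.cosine_similarity(query_vec, report_vecs))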

Findings

This study provides the necessary functionalities to deliver relevant health case reports based on input expressed in ordinary terms. The proposed framework includes features for health management, user feedback acquisition and ranking by weights to obtain the most pertinent case reports.

Originality/value

This study contributes to health information systems by analyzing patients' experiences and treatments with the case report retrieval model. The results of this study can provide immense benefit to the general public who intend to find treatment decisions and experiences from relevant case reports.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 22 February 2024

Yuzhuo Wang, Chengzhi Zhang, Min Song, Seongdeok Kim, Youngsoo Ko and Juhee Lee

In the era of artificial intelligence (AI), algorithms have gained unprecedented importance. Scientific studies have shown that algorithms are frequently mentioned in papers…

Abstract

Purpose

In the era of artificial intelligence (AI), algorithms have gained unprecedented importance. Scientific studies have shown that algorithms are frequently mentioned in papers, making mention frequency a classical indicator of their popularity and influence. However, contemporary methods for evaluating influence tend to focus solely on individual algorithms, disregarding the collective impact resulting from the interconnectedness of these algorithms, which can provide a new way to reveal their roles and importance within algorithm clusters. This paper aims to build the co-occurrence network of algorithms in the natural language processing field based on the full-text content of academic papers and analyze the academic influence of algorithms in the group based on the features of the network.

Design/methodology/approach

We use deep learning models to extract algorithm entities from articles and construct the whole, cumulative and annual co-occurrence networks. We first analyze the characteristics of algorithm networks and then use various centrality metrics to obtain the score and ranking of group influence for each algorithm in the whole domain and each year. Finally, we analyze the influence evolution of different representative algorithms.
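
To make the network-construction step concrete (an illustrative sketch with toy data, not the study's pipeline), co-occurrence edges and centrality-based influence scores can be computed with networkx roughly as follows:

    # Illustrative co-occurrence network from per-paper algorithm mentions (toy data).
    import itertools
    import networkx as nx

    papers = [                                    # assumed extracted algorithm entities
        {"LSTM", "CRF", "word2vec"},
        {"BERT", "LSTM"},
        {"BERT", "Transformer", "LSTM"},
    ]

    G = nx.Graph()
    for mentions in papers:
        for a, b in itertools.combinations(sorted(mentions), 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1            # edge weight = number of co-occurring papers
            else:
                G.add_edge(a, b, weight=1)

    print(nx.degree_centrality(G))                # popularity within the group
    print(nx.betweenness_centrality(G))           # control over paths between algorithms
    print(nx.eigenvector_centrality(G, weight="weight"))  # centrality weighted by tie strength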

Findings

The results indicate that algorithm networks also have the characteristics of complex networks, with tight connections between nodes developing over approximately four decades. Algorithms that are classic, high-performing and appear at the junctions of different eras can possess high popularity, control, a central position and balanced influence in the network. As an algorithm gradually diminishes its sway within the group, it typically loses its core position first, followed by a dwindling association with other algorithms.

Originality/value

To the best of the authors’ knowledge, this paper is the first large-scale analysis of algorithm networks. The extensive temporal coverage, spanning over four decades of academic publications, ensures the depth and integrity of the network. Our results serve as a cornerstone for constructing multifaceted networks interlinking algorithms, scholars and tasks, facilitating future exploration of their scientific roles and semantic relations.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 5 December 2023

Dezhao Tang, Qiqi Cai, Tiandan Nie, Yuanyuan Zhang and Jinghua Wu

Integrating artificial intelligence and quantitative investment has given birth to various agricultural futures price prediction models suitable for nonlinear and non-stationary…

Abstract

Purpose

Integrating artificial intelligence and quantitative investment has given birth to various agricultural futures price prediction models suitable for nonlinear and non-stationary data. However, traditional models have limitations in testing spatial transmission relationships in time series, and their practical prediction performance is restricted by the inability to obtain future prices of other variable factors.

Design/methodology/approach

To explore the impact of spatiotemporal factors on agricultural prices and achieve the best prediction effect, the authors propose a novel price prediction method for China's soybean and palm oil futures prices. First, an improved Granger causality test was adopted to explore the spatial transmission relationship in the data; second, Seasonal and Trend decomposition using Loess (STL) was employed to decompose the prices; then, the Apriori algorithm was applied to test the time spillover effect between data, and CRITIC was used to extract essential features; finally, the N-BEATS model was selected as the prediction model for futures prices.
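
Two of the named steps, STL decomposition and a (standard, not improved) Granger causality test, can be illustrated with statsmodels; the monthly price series below are synthetic placeholders, not the paper's data.

    # Illustrative STL decomposition and Granger causality test on synthetic series.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import STL
    from statsmodels.tsa.stattools import grangercausalitytests

    idx = pd.date_range("2015-01-01", periods=96, freq="MS")
    soybean = pd.Series(np.random.rand(96) + np.linspace(0, 1.0, 96), index=idx)
    palm_oil = pd.Series(np.random.rand(96) + np.linspace(0, 0.8, 96), index=idx)

    # Step 1: decompose a price series into trend, seasonal and residual components.
    res = STL(soybean, period=12).fit()
    trend, seasonal, resid = res.trend, res.seasonal, res.resid

    # Step 2: does palm oil help predict soybean? (tests second column -> first column)
    data = pd.concat([soybean, palm_oil], axis=1)
    data.columns = ["soybean", "palm_oil"]
    grangercausalitytests(data[["soybean", "palm_oil"]], maxlag=3)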

Findings

Using the Apriori and STL algorithms, the authors found a spillover effect in agricultural prices, with past trends and seasonal data impacting future prices. Using the improved Granger causality test to analyze unidirectional causality between the prices, the authors identified a spatial effect among agricultural product prices. By comparison, the N-BEATS model based on spatiotemporal factors shows excellent prediction performance across different prices.

Originality/value

This paper addressed the problem that traditional models can only predict the current prices of different agricultural products on the same date, and traditional spatial models cannot test the characteristics of time series. This result is beneficial to the sustainable development of agriculture and provides necessary numerical and technical support to ensure national agricultural security.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 18 January 2024

Jing Tang, Yida Guo and Yilin Han

Coal is a critical global energy source, and fluctuations in its price significantly impact related enterprises' profitability. This study aims to develop a robust model for…

Abstract

Purpose

Coal is a critical global energy source, and fluctuations in its price significantly impact related enterprises' profitability. This study aims to develop a robust model for predicting the coal price index to enhance coal purchase strategies for coal-consuming enterprises and provide crucial information for global carbon emission reduction.

Design/methodology/approach

The proposed coal price forecasting system combines data decomposition, semi-supervised feature engineering, ensemble learning and deep learning. It addresses the challenge of merging low-resolution and high-resolution data by adaptively combining both types of data and filling in missing gaps through interpolation for internal missing data and self-supervision for initial/terminal missing data. The system employs self-supervised learning to fill in complex missing data.

Findings

The ensemble model, which combines long short-term memory, XGBoost and support vector regression, demonstrated the best prediction performance among the tested models. It exhibited superior accuracy and stability across multiple indices in two datasets, namely the Bohai-Rim steam-coal price index and coal daily settlement price.
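
As an illustration of such an averaging ensemble (a sketch on synthetic data, not the paper's tuned system), two of the three members are shown below; the LSTM member's forecasts would be averaged in the same way.

    # Illustrative equal-weight ensemble of SVR and XGBoost regressors (toy data).
    import numpy as np
    from sklearn.svm import SVR
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    X = rng.random((200, 6))                       # assumed features: lagged price-index values
    y = X @ rng.random(6) + 0.05 * rng.standard_normal(200)

    svr = SVR(kernel="rbf").fit(X[:150], y[:150])
    xgb = XGBRegressor(n_estimators=200, max_depth=3).fit(X[:150], y[:150])

    pred = (svr.predict(X[150:]) + xgb.predict(X[150:])) / 2   # simple average of member forecasts
    print("ensemble MAE:", np.abs(pred - y[150:]).mean())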

Originality/value

The proposed coal price forecasting system stands out as it integrates data decomposition, semi-supervised feature engineering, ensemble learning and deep learning. Moreover, the system pioneers the use of self-supervised learning for filling in complex missing data, contributing to its originality and effectiveness.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 8 January 2024

Na Ye, Dingguo Yu, Xiaoyu Ma, Yijie Zhou and Yanqin Yan

Fake news in cyberspace has greatly interfered with national governance, economic development and cultural communication, which has greatly increased the demand for fake news…

Abstract

Purpose

Fake news in cyberspace has greatly interfered with national governance, economic development and cultural communication, which has sharply increased the demand for fake news detection and intervention. At present, recognition methods based on news content all lose part of the information to varying degrees. This paper proposes a lightweight content-based detection method to achieve early identification of false information at low computational cost.

Design/methodology/approach

The authors propose a lightweight fake news detection framework for English text, including a new textual feature extraction method: English text and symbols are mapped to 0-255 using American Standard Code for Information Interchange (ASCII) codes, the resulting sequence of numbers is treated as image pixel values and a computer vision model is used for detection. The authors also compare their framework with traditional word2vec, GloVe, bidirectional encoder representations from transformers (BERT) and other methods.
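
A rough illustration of the ASCII-to-image idea (the image size and the torchvision ShuffleNet variant are assumptions, not the authors' exact framework):

    # Illustrative text-to-image mapping classified by a lightweight CNN.
    import numpy as np
    import torch
    from torchvision.models import shufflenet_v2_x0_5

    SIDE = 64                                       # assumed image side (64*64 characters per article)

    def text_to_image(text):
        codes = [min(ord(c), 255) for c in text][:SIDE * SIDE]  # map characters to 0-255
        codes += [0] * (SIDE * SIDE - len(codes))                # pad short texts with zeros
        img = np.array(codes, dtype=np.float32).reshape(SIDE, SIDE) / 255.0
        return torch.tensor(img).unsqueeze(0).repeat(3, 1, 1)    # replicate to 3 channels for the CNN

    model = shufflenet_v2_x0_5(num_classes=2)       # fake vs. real
    batch = torch.stack([text_to_image("Breaking: example article text")])
    print(model(batch).shape)                       # torch.Size([1, 2])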

Findings

The authors conduct experiments with the lightweight neural networks GhostNet and ShuffleNet, and the results show that the proposed framework outperforms the baselines in accuracy on both networks.

Originality/value

The authors' method does not rely on additional information beyond the text data and can perform the fake news detection task efficiently with less computational resource consumption. In addition, the feature extraction method of this framework is relatively new and instructive for text content-based classification, and it can detect fake news promptly at the early stage of propagation.

Details

Online Information Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1468-4527
