Search results

1 – 10 of 76
Article
Publication date: 22 April 2022

Sreedhar Jyothi and Geetanjali Nelloru

Patients with ventricular arrhythmias and atrial fibrillation, which are early markers of stroke and sudden cardiac death, as well as benign subjects are all studied using the…

Abstract

Purpose

Patients with ventricular arrhythmias and atrial fibrillation, which are early markers of stroke and sudden cardiac death, as well as benign subjects are all studied using the electrocardiogram (ECG). ECG signals record the heart's electrical activity as waveforms, which are analysed to identify cardiac anomalies. Patients with these disorders must be identified as soon as possible, yet manual inspection of ECG signals is difficult, time-consuming and subject to inter-observer variability.

Design/methodology/approach

Various forms of arrhythmia are difficult to distinguish in complex, non-linear ECG data, so computer-aided decision support (CAD) systems can be beneficial. CAD systems, which use machine learning algorithms to detect subtle changes in cardiac rhythm, can classify arrhythmias rapidly, accurately, repeatably and objectively, and can also be used to detect and classify cardiac infarctions. The primary objective is to categorize arrhythmias more accurately in less computational time. Using signal and axis characteristics and their associated n-grams as features, this paper makes a significant addition to the field. An experimental investigation was conducted using a benchmark dataset as input to multi-label multi-fold cross-validation.
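The abstract does not include the authors' implementation; as an illustration, the multi-fold cross-validation protocol it describes can be sketched in Python, with the train_and_score callback standing in for any hypothetical arrhythmia classifier (it is not the paper's actual model):

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Split sample indices into k roughly equal, shuffled folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(features, labels, train_and_score, k=5):
    """Return per-fold scores: each fold serves once as the held-out test set."""
    folds = k_fold_indices(len(features), k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(train_and_score(
            [features[j] for j in train_idx], [labels[j] for j in train_idx],
            [features[j] for j in test_idx], [labels[j] for j in test_idx]))
    return scores
```

Averaging the returned per-fold scores gives the cross-validation metrics that the findings below weigh against other models.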

Findings

This dataset was used as input for cross-validation on contemporary models, and the resulting cross-validation metrics were weighed against the performance metrics of other contemporary models. The proposed model's high sensitivity and specificity result in few false alarms.
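Sensitivity, specificity and false-alarm rate follow directly from the binary confusion counts; a minimal sketch, assuming labels of 1 for arrhythmia and 0 for benign:

```python
def binary_rates(y_true, y_pred):
    """Sensitivity, specificity and false-alarm rate from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    false_alarm = fp / (fp + tn)   # equals 1 - specificity
    return sensitivity, specificity, false_alarm
```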

Originality/value

The results of cross-validation are significant. In terms of specificity, sensitivity and decision accuracy, the proposed model outperforms other contemporary models.

Details

International Journal of Intelligent Unmanned Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2049-6427

Open Access
Article
Publication date: 7 October 2021

Enas M.F. El Houby

Diabetic retinopathy (DR) is one of the dangerous complications of diabetes. Its grade level must be tracked to manage its progress and to start the appropriate decision for…

Abstract

Purpose

Diabetic retinopathy (DR) is one of the dangerous complications of diabetes. Its grade level must be tracked to manage its progress and to start the appropriate decision for treatment in time. Effective automated methods for the detection of DR and the classification of its severity stage are necessary to reduce the burden on ophthalmologists and diagnostic contradictions among manual readers.

Design/methodology/approach

In this research, a convolutional neural network (CNN) was applied to colored retinal fundus images for the detection of DR and the classification of its stages. A CNN can recognize sophisticated features in the retina and provide an automatic diagnosis. The pre-trained VGG-16 model was applied using a transfer learning (TL) approach to utilize already-learned parameters in the detection.
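The core idea of the TL approach, freezing the pre-trained backbone and training only a new classification head, can be illustrated in a deliberately simplified form with a single perceptron head over precomputed backbone features (this is a sketch of the concept, not the paper's actual VGG-16 code):

```python
def train_head(features, labels, lr=0.1, epochs=50):
    """Train only a new linear head; the frozen backbone's weights never change.

    features: fixed-length vectors produced by a frozen feature extractor
    (standing in for VGG-16 convolutional outputs); labels: 0/1.
    """
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b
```

In a real pipeline the head would be a softmax layer fine-tuned with a deep learning framework; only the division of labour (frozen backbone, trainable head) is the point here.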

Findings

The experiments, set up with different severity groupings, achieved promising results. The best accuracies for the 2-class, 3-class, 4-class and 5-class classifications are 86.5%, 80.5%, 63.5% and 73.7%, respectively.

Originality/value

In this research, VGG-16 was used to detect and classify DR stages using the TL approach. Different combinations of classes were used in the classification of DR severity stages to illustrate the ability of the model to differentiate between the classes and verify the effect of these changes on the performance of the model.
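One way to realize the different class combinations described above is a label-remapping table over the five DR grades (0 = no DR through 4 = proliferative). The groupings below are illustrative assumptions, not necessarily the paper's exact combinations:

```python
# Illustrative regroupings of the five DR severity grades.
GROUPINGS = {
    2: {0: 0, 1: 1, 2: 1, 3: 1, 4: 1},   # no DR vs any DR
    3: {0: 0, 1: 1, 2: 1, 3: 1, 4: 2},   # none / non-proliferative / proliferative
    4: {0: 0, 1: 1, 2: 1, 3: 2, 4: 3},   # merge mild and moderate NPDR
    5: {g: g for g in range(5)},          # original severity grades
}

def regroup(labels, n_classes):
    """Map original 5-grade labels into the chosen coarser task."""
    return [GROUPINGS[n_classes][y] for y in labels]
```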

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 20 March 2024

Ziming Zhou, Fengnian Zhao and David Hung

Higher energy conversion efficiency of an internal combustion engine can be achieved with optimal control of unsteady in-cylinder flow fields inside a direct-injection (DI) engine…

Abstract

Purpose

Higher energy conversion efficiency of an internal combustion engine can be achieved with optimal control of the unsteady in-cylinder flow fields inside a direct-injection (DI) engine. However, predicting the nonlinear and transient in-cylinder flow motion remains a daunting task because it is highly complex and changes in both space and time. Recently, machine learning methods have demonstrated great promise in inferring relatively simple temporal flow field development. This paper aims to present a physics-guided machine learning approach that achieves accurate, generalizable prediction of complex swirl-induced flow field motions.

Design/methodology/approach

To achieve high-fidelity time-series prediction of unsteady engine flow fields, this work features an automated machine learning framework with the following objectives: (1) the spatiotemporal physical constraint of the flow field structure is transferred to the machine learning structure; (2) the ML inputs and targets are designed to ensure high model convergence with limited sets of experiments; and (3) the prediction results are optimized by an ensemble learning mechanism within the automated machine learning framework.
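Objective (3), the ensemble mechanism, can be sketched as a weighted average of base-learner field predictions. The model interface and the uniform default weights below are assumptions for illustration, not the paper's design:

```python
def ensemble_predict(models, history, weights=None):
    """Weighted average of base-learner predictions for the next flow field.

    Each model maps a flow-field history to a predicted 2-D field
    (a list of rows of floats); weights default to uniform.
    """
    preds = [m(history) for m in models]
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    rows, cols = len(preds[0]), len(preds[0][0])
    return [[sum(w * p[r][c] for w, p in zip(weights, preds))
             for c in range(cols)] for r in range(rows)]
```

Automated ML frameworks typically learn the weights from validation error rather than fixing them uniformly.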

Findings

The proposed data-driven framework proves effective over different time periods and different extents of unsteadiness in the flow dynamics, and the predicted flow fields are highly similar to the target fields under various complex flow patterns. Among the described framework designs, the utilization of the spatial flow field structure is the featured improvement to the time-series flow field prediction process.

Originality/value

The proposed flow field prediction framework could be generalized to different crank angle periods, cycles and swirl ratio conditions, which could greatly promote real-time flow control and reduce experiments on in-cylinder flow field measurement and diagnostics.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 21 December 2023

Oladosu Oyebisi Oladimeji and Ayodeji Olusegun J. Ibitoye

Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Over the…

Abstract

Purpose

Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Deep learning approaches have gained popularity over traditional methods in automating the diagnosis of brain tumors, offering the potential for more accurate and efficient results. Notably, attention-based models have emerged, dynamically refining and amplifying model features to further elevate diagnostic capabilities. However, the specific impact of using the channel, spatial or combined attention methods of the convolutional block attention module (CBAM) for brain tumor classification has not been fully investigated.

Design/methodology/approach

To selectively emphasize relevant features while suppressing noise, ResNet50 coupled with the CBAM (ResNet50-CBAM) was used for the classification of brain tumors in this research.
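CBAM's channel-attention idea, reweighting whole feature channels before spatial processing, can be caricatured in a few lines. The sketch below uses a single sigmoid gate per channel in place of CBAM's shared MLP over average- and max-pooled descriptors, so it is a toy illustration only, not the module's actual formulation:

```python
import math

def channel_attention(feature_maps):
    """Scale each channel by a sigmoid gate derived from its global average.

    feature_maps: list of 2-D channels (lists of rows of floats).
    A toy stand-in for CBAM channel attention.
    """
    out = []
    for ch in feature_maps:
        avg = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        gate = 1.0 / (1.0 + math.exp(-avg))   # sigmoid gate in (0, 1)
        out.append([[v * gate for v in row] for row in ch])
    return out
```

The real module learns the gating from data and adds a spatial-attention stage; the point here is only that attention multiplies features by learned importance weights.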

Findings

The ResNet50-CBAM outperformed existing deep learning classification methods such as the convolutional neural network (CNN), achieving a superior performance of 99.43%, 99.01%, 98.7% and 99.25% in accuracy, recall, precision and AUC, respectively, when compared to the existing classification methods on the same dataset.

Practical implications

Since ResNet-CBAM fusion can capture the spatial context while enhancing feature representation, it can be integrated into the brain classification software platforms for physicians toward enhanced clinical decision-making and improved brain tumor classification.

Originality/value

This research has not been published anywhere else.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 11 July 2023

Abhinandan Chatterjee, Pradip Bala, Shruti Gedam, Sanchita Paul and Nishant Goyal

Depression is a mental health problem characterized by a persistent sense of sadness and loss of interest. EEG signals are regarded as the most appropriate instruments for…

Abstract

Purpose

Depression is a mental health problem characterized by a persistent sense of sadness and loss of interest. EEG signals are regarded as the most appropriate instruments for diagnosing depression because they reflect the operating status of the human brain. The purpose of this study is the early detection of depression among people using EEG signals.

Design/methodology/approach

(i) Artifacts are removed by filtering, and linear and non-linear features are extracted; (ii) feature scaling is done using a standard scaler, while principal component analysis (PCA) is used for feature reduction; (iii) the linear features, the non-linear features and the combination of both (only where accuracy is highest) are taken for further analysis, where several ML and DL classifiers are applied for the classification of depression; and (iv) in this study, a total of 15 distinct ML and DL methods (KNN, SVM, bagging SVM, RF, GB, Extreme Gradient Boosting, MNB, AdaBoost, bagging RF, BootAgg, Gaussian NB, RNN, 1DCNN, RBFNN and LSTM) are utilized as classifiers.
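The standard scaling in step (ii) is straightforward to sketch; a dependency-free version of the z-score transform (in practice a library implementation such as scikit-learn's StandardScaler would be used) might read:

```python
import math

def standard_scale(X):
    """Z-score each feature column: zero mean, unit variance per column."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    # Guard against zero variance: a constant column is left centred only.
    stds = [math.sqrt(sum((row[j] - means[j]) ** 2 for row in X) / n) or 1.0
            for j in range(d)]
    return [[(row[j] - means[j]) / stds[j] for j in range(d)] for row in X]
```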

Findings

1. Among the linear features, alpha, alpha asymmetry, gamma and gamma asymmetry give the best results, while RWE, DFA, CD and AE give the best results among the non-linear features. 2. In the linear features, gamma and alpha asymmetry give 99.98% accuracy with bagging RF, while gamma asymmetry gives 99.98% accuracy with BootAgg. 3. For the non-linear features, RWE and DFA reach 99.84% accuracy with RF, DFA reaches 99.97% accuracy with XGBoost, and RWE reaches 99.94% accuracy with BootAgg. 4. With DL, gamma asymmetry among the linear features gives more than 96% accuracy with RNN and 91% with LSTM, while among the non-linear features CD and AE reach 89% accuracy with LSTM. 5. Combining linear and non-linear features, the highest accuracy is achieved by bagging RF (98.50%) with gamma asymmetry + RWE; in DL, alpha + RWE, gamma asymmetry + CD and gamma asymmetry + RWE achieve 98% accuracy with LSTM.

Originality/value

A novel dataset was collected from the Central Institute of Psychiatry (CIP), Ranchi, recorded using 128 channels, whereas major previous studies used fewer channels; the details of the study participants are summarized, and a model is developed for statistical analysis using N-way ANOVA; artifacts are removed by high- and low-pass filtering of epoch data followed by re-referencing and independent component analysis for noise removal; the linear features band power and interhemispheric asymmetry and the non-linear features relative wavelet energy, wavelet entropy, approximate entropy, sample entropy, detrended fluctuation analysis and correlation dimension are extracted; the model utilizes 213,072 epochs of 5-s EEG data, which allows it to train for longer, thereby increasing the efficiency of the classifiers. Feature scaling is done using a standard scaler rather than normalization because it helps increase the accuracy of the models (especially for deep learning algorithms), while PCA is used for feature reduction; the linear features, the non-linear features and their combination are taken for extensive analysis in conjunction with ML and DL classifiers for the classification of depression. The combination of linear and non-linear features (only those whose accuracy is highest) is used for the best detection results.
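The interhemispheric asymmetry features mentioned above admit several common definitions; two frequently used forms are shown below, with the caveat that the abstract does not specify which formula the study uses:

```python
import math

def log_asymmetry(left_power, right_power):
    """ln(right) - ln(left): positive when the right hemisphere dominates."""
    return math.log(right_power) - math.log(left_power)

def ratio_asymmetry(left_power, right_power):
    """(right - left) / (right + left): a scale-free index bounded in [-1, 1]."""
    return (right_power - left_power) / (right_power + left_power)
```

Either form is computed per frequency band (e.g. alpha or gamma) from the band powers of a symmetric electrode pair.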

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 17 October 2023

Hatzav Yoffe, Noam Raanan, Shaked Fried, Pnina Plaut and Yasha Jacob Grobman

This study uses computer-aided design to improve the ecological and environmental sustainability of early-stage landscape designs. Urban expansion on open land and natural…

Abstract

Purpose

This study uses computer-aided design to improve the ecological and environmental sustainability of early-stage landscape designs. Urban expansion on open land and natural habitats has led to a decline in biodiversity and increased climate change impacts, affecting urban inhabitants' quality of life and well-being. While sustainability indicators have been employed to assess the performance of buildings and neighbourhoods, landscape designs' ecological and environmental sustainability has received comparatively less attention, particularly in early-design stages where applying sustainability approaches is impactful.

Design/methodology/approach

The authors propose a computation framework for evaluating key landscape sustainability indicators and providing real-time feedback to designers. The method integrates spatial indicators with widely recognized sustainability rating system credits. A specialized tool was developed for measuring biomass optimization, precipitation management and urban heat mitigation, and a proof-of-concept experiment tested the tool's effectiveness on three Mediterranean neighbourhood-level designs.

Findings

The results show a clear connection between the applied design strategy and the indicator behaviour. This connection enhances the ability to establish sustainability benchmarks for different types of landscape developments using parametric design.

Practical implications

The study allows non-expert designers to measure and embed landscape sustainability early in the design stages, thus lowering the entry level for incorporating biodiversity enhancement and climate mitigation approaches.

Originality/value

This study expands the parametric vocabulary for measuring landscape sustainability by introducing spatial ecosystem services and architectural sustainability indicators on a unified platform, enabling the integration of critical climate and biodiversity-loss solutions earlier in the development process.

Details

Archnet-IJAR: International Journal of Architectural Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2631-6862

Article
Publication date: 31 October 2023

Yangze Liang and Zhao Xu

Monitoring of the quality of precast concrete (PC) components is crucial for the success of prefabricated construction projects. Currently, quality monitoring of PC components…

Abstract

Purpose

Monitoring the quality of precast concrete (PC) components is crucial for the success of prefabricated construction projects. Currently, quality monitoring of PC components during the construction phase is predominantly done manually, resulting in low efficiency and hindering the progress of intelligent construction. This paper presents an intelligent inspection method for assessing the appearance quality of PC components, utilizing an enhanced you only look once (YOLO) model and multi-source data. The aim of this research is to achieve automated management of the appearance quality of precast components in the prefabricated construction process through digital means.

Design/methodology/approach

The paper begins by establishing an improved YOLO model and an image dataset for evaluating appearance quality. Through object detection in the images, a preliminary and efficient assessment of the precast components' appearance quality is achieved. Moreover, the detection results are mapped onto the point cloud for high-precision quality inspection. In the case of precast components with quality defects, precise quality inspection is conducted by combining the three-dimensional model data obtained from forward design conversion with the captured point cloud data through registration. Additionally, the paper proposes a framework for an automated inspection platform dedicated to assessing appearance quality in prefabricated buildings, encompassing the platform's hardware network.
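The mapping of image-space detections onto the point cloud can be sketched as a projection-and-filter step. The camera model is assumed to be calibrated elsewhere, and this is an illustration of the idea rather than the paper's method:

```python
def points_in_detection(points, project, box):
    """Select 3-D points whose image projection falls inside a detection box.

    points  : list of (x, y, z) tuples from the captured point cloud
    project : callable mapping a 3-D point to (u, v) pixel coordinates
              (a calibrated camera model, assumed to exist elsewhere)
    box     : (u_min, v_min, u_max, v_max) YOLO detection rectangle
    """
    u0, v0, u1, v1 = box
    selected = []
    for p in points:
        u, v = project(p)
        if u0 <= u <= u1 and v0 <= v <= v1:
            selected.append(p)
    return selected
```

The selected points would then be registered against the forward-design 3-D model for the high-precision defect measurement the paper describes.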

Findings

The improved YOLO model achieved a best mean average precision of 85.02% on the VOC2007 dataset, surpassing the performance of most similar models. After targeted training, the model exhibits excellent recognition capabilities for the four common appearance quality defects. When mapped onto the point cloud, the accuracy of quality inspection based on point cloud data and forward design is within 0.1 mm. The appearance quality inspection platform enables feedback and optimization of quality issues.

Originality/value

The proposed method in this study enables high-precision, visualized and automated detection of the appearance quality of PC components. It effectively meets the demand for quality inspection of precast components on construction sites of prefabricated buildings, providing technological support for the development of intelligent construction. The design of the appearance quality inspection platform's logic and framework facilitates the integration of the method, laying the foundation for efficient quality management in the future.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 23 June 2023

Rubel, Bijay Prasad Kushwaha and Md Helal Miah

This study aims to highlight the inconsistency between conventional knowledge push judgements and the price of knowledge push. Also, a three-way decision-based relevant knowledge…

Abstract

Purpose

This study aims to highlight the inconsistency between conventional knowledge push judgements and the cost of knowledge push. A relevant knowledge push algorithm based on three-way decision-making is also proposed.

Design/methodology/approach

Using an 80:20 ratio, the experiment randomly splits the data into a training set and a test set. Each video is used as a knowledge unit (structure), and its category is used as a knowledge attribute. The decision thresholds are then determined using the user's overall rating. A fusion coefficient is needed to calculate the relevant knowledge obtained through the experiments. The impact of the push model is then examined in comparison with the conventional push model: the experiment compares three relevant-knowledge push models with two push models based on the conventional international classification functioning (ICF) and three push models based on traditional ICF. Average push cost, accuracy rate, recall rate and coverage rate are the metrics used to assess the push effect.
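The three-way decision at the heart of the proposed push algorithm partitions items into positive, boundary and negative regions by two thresholds; a minimal sketch, with threshold values chosen purely for illustration (the paper derives its thresholds from user ratings):

```python
def three_way_decision(score, alpha=0.7, beta=0.4):
    """Classic three-way rule: push, defer or discard by two thresholds.

    Requires alpha > beta; the values here are illustrative, not the paper's.
    """
    if score >= alpha:
        return "push"      # positive region: push the knowledge unit
    if score <= beta:
        return "discard"   # negative region: do not push
    return "defer"         # boundary region: postpone the decision
```

The boundary region is what distinguishes this from a two-way push model, and deferring those items is why coverage drops while accuracy and push cost improve, as the findings below report.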

Findings

On average, the three-way knowledge push models in this research outperform the other push models in terms of push cost, accuracy rate and recall rate. However, they have a lower coverage rate than the two-way push model: the three-way models condense the knowledge push and sacrifice some coverage. As a result, refining the pushed knowledge yields higher accuracy rates and lower push costs.

Practical implications

This research has practical ramifications for the quick expansion of knowledge and its hegemonic status in value creation as the main methodology for knowledge services.

Originality/value

To the best of the authors’ knowledge, this is the first theory developed on the three-way decision-making process of knowledge push services to increase organizational effectiveness and efficiency.

Details

VINE Journal of Information and Knowledge Management Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5891

Article
Publication date: 21 March 2024

Thamaraiselvan Natarajan, P. Pragha, Krantiraditya Dhalmahapatra and Deepak Ramanan Veera Raghavan

The metaverse, which is now revolutionizing how brands strategize their business needs, necessitates understanding individual opinions. Sentiment analysis deciphers emotions and…

Abstract

Purpose

The metaverse, which is now revolutionizing how brands strategize their business needs, necessitates understanding individual opinions. Sentiment analysis deciphers emotions and uncovers a deeper understanding of user opinions and trends within this digital realm. Further, sentiments signify the underlying factor that triggers one’s intent to use technology like the metaverse. Positive sentiments often correlate with positive user experiences, while negative sentiments may signify issues or frustrations. Brands may consider these sentiments and implement them on their metaverse platforms for a seamless user experience.

Design/methodology/approach

The current study adopts machine learning sentiment analysis techniques using Support Vector Machine, Doc2Vec, RNN, and CNN to explore the sentiment of individuals toward metaverse in a user-generated context. The topics were discovered using the topic modeling method, and sentiment analysis was performed subsequently.
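As a toy stand-in for the SVM, Doc2Vec, RNN and CNN pipelines named above, sentiment classification over bag-of-words vectors can be sketched with a nearest-centroid rule; the example vocabulary and labels are invented for illustration:

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words term counts for one document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text, centroids):
    """Assign the label whose centroid vector is most similar to the text."""
    return max(centroids, key=lambda lab: cosine(bow(text), centroids[lab]))
```

Real pipelines replace the count vectors with learned embeddings and the centroid rule with a trained classifier, but the shape of the task (vectorize, then score against each sentiment class) is the same.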

Findings

The results revealed that the users had a positive notion about the experience and orientation of the metaverse while having a negative attitude towards the economy, data, and cyber security. The accuracy of each model has been analyzed, and it has been concluded that CNN provides better accuracy, averaging 89%, compared to the other models.

Research limitations/implications

Analyzing sentiment can reveal how the general public perceives the metaverse. Positive sentiment may suggest enthusiasm and readiness for adoption, while negative sentiment might indicate skepticism or concerns. Given the positive user notions about the metaverse’s experience and orientation, developers should continue to focus on creating innovative and immersive virtual environments. At the same time, users' concerns about data, cybersecurity and the economy are critical. The negative attitude toward the metaverse’s economy suggests a need for innovation in economic models within the metaverse. Also, developers and platform operators should prioritize robust data security measures. Implementing strong encryption and two-factor authentication and educating users about cybersecurity best practices can address these concerns and enhance user trust.

Social implications

In terms of societal dynamics, the metaverse could revolutionize communication and relationships by altering traditional notions of proximity and the presence of its users. Further, virtual economies might emerge, with virtual assets having real-world value, presenting both opportunities and challenges for industries and regulators.

Originality/value

The current study contributes to research as it is the first of its kind to explore the sentiments of individuals toward the metaverse using deep learning techniques and evaluate the accuracy of these models.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 19 January 2024

Mohamed Marzouk and Mohamed Zaher

Facility management gained profound importance due to the increasing complexity of different systems and the cost of operation and maintenance. However, due to the increasing…

Abstract

Purpose

Facility management has gained profound importance due to the increasing complexity of building systems and the cost of operation and maintenance; this same complexity also means facility managers may suffer from a lack of information. The purpose of this paper is to propose a new facility management approach that links segmented assets to the vital data required for managing facilities.

Design/methodology/approach

Automatic point cloud segmentation is one of the most crucial processes for modelling building facilities. In this research, laser scanning is used for point cloud acquisition, and three segmentation methods are compared: the region-growing algorithm, the colour-based region-growing algorithm and the Euclidean cluster-extraction algorithm.
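Euclidean cluster extraction groups points connected by chains of neighbours within a distance tolerance; a brute-force sketch follows (real pipelines, e.g. PCL's implementation, accelerate the neighbour search with a k-d tree):

```python
from collections import deque

def euclidean_clusters(points, tol):
    """Group points whose pairwise chain distance stays within tol.

    points: list of (x, y, z) tuples. Brute-force O(n^2) neighbour search,
    sufficient for illustration only.
    """
    def close(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) <= tol ** 2

    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            # Flood-fill: pull in every unvisited point within tolerance.
            neighbours = [j for j in unvisited if close(points[i], points[j])]
            for j in neighbours:
                unvisited.discard(j)
            queue.extend(neighbours)
            cluster.extend(neighbours)
        clusters.append(sorted(points[i] for i in cluster))
    return clusters
```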

Findings

A case study tests the accuracy of the considered point cloud segmentation algorithms using the precision, recall and F-score metrics. The results indicate that Euclidean cluster extraction and the region-growing algorithm achieve high segmentation accuracy.
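The three evaluation metrics follow from per-class true-positive, false-positive and false-negative counts:

```python
def prf(tp, fp, fn):
    """Precision, recall and F-score from per-class confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)   # harmonic mean of the two
    return precision, recall, f
```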

Originality/value

The research presents a comparative approach for selecting the most appropriate segmentation approach required for accurate modelling. As such, the segmented assets can be linked easily with the data required for facility management.

Details

International Journal of Building Pathology and Adaptation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-4708
