Search results

1 – 10 of 50
Article
Publication date: 1 December 2023

Hao Wang, Hamzeh Al Shraida and Yu Jin

Limited geometric accuracy is one of the major challenges that hinder the wider application of additive manufacturing (AM). This paper aims to predict in-plane shape deviation for…

Abstract

Purpose

Limited geometric accuracy is one of the major challenges that hinder the wider application of additive manufacturing (AM). This paper aims to predict in-plane shape deviation for online inspection and compensation to prevent error accumulation and improve shape fidelity in AM.

Design/methodology/approach

A sequence-to-sequence model with an attention mechanism (Seq2Seq+Attention) is proposed and implemented to predict the deviations of subsequent layers or occluded toolpaths after multiresolution alignment. A shape compensation plan can then be executed for layers with large predicted deviations.
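
The abstract does not include an implementation; a minimal sketch of a sequence-to-sequence predictor with additive attention, written in PyTorch, could look like the following. The feature dimensions, the way deviations are vectorised per toolpath point and all module names are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class Seq2SeqAttention(nn.Module):
    """Minimal sequence-to-sequence model with additive attention.

    Illustrative only: feature sizes and the way layer-wise deviation
    profiles are vectorised are assumptions, not the authors' design.
    """

    def __init__(self, in_dim=2, hid=64, out_dim=2):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hid, batch_first=True)
        self.decoder_cell = nn.LSTMCell(in_dim + hid, hid)
        self.attn = nn.Linear(2 * hid, 1)            # additive attention score
        self.out = nn.Linear(hid, out_dim)

    def forward(self, src, tgt_len):
        # src: (batch, src_len, in_dim) observed toolpath deviations
        enc_out, (h, c) = self.encoder(src)          # enc_out: (B, L, hid)
        h, c = h[-1], c[-1]
        dec_in = src[:, -1, :]                       # start from the last observation
        preds = []
        for _ in range(tgt_len):
            # attention weights over encoder states
            scores = self.attn(torch.cat(
                [enc_out, h.unsqueeze(1).expand_as(enc_out)], dim=-1)).squeeze(-1)
            w = torch.softmax(scores, dim=1)                         # (B, L)
            context = torch.bmm(w.unsqueeze(1), enc_out).squeeze(1)  # (B, hid)
            h, c = self.decoder_cell(torch.cat([dec_in, context], dim=-1), (h, c))
            step = self.out(h)                                       # predicted deviation
            preds.append(step)
            dec_in = step                                            # feed prediction back
        return torch.stack(preds, dim=1)             # (B, tgt_len, out_dim)

# usage: predict deviations for the next layer's 50 toolpath points (synthetic input)
model = Seq2SeqAttention()
observed = torch.randn(8, 120, 2)
next_layer = model(observed, tgt_len=50)
```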

Findings

The proposed Seq2Seq+Attention model provides consistent prediction accuracy. The compensation plan based on the predicted deviation significantly improves printing fidelity for layers detected with large deviations.

Practical implications

In experiments conducted on knee joint samples, the proposed method outperforms three other machine learning methods for both subsequent-layer and occluded-toolpath deviation prediction.

Originality/value

This work fills a research gap by predicting in-plane deviation not only for subsequent layers but also for paths occluded because of missing scanning measurements. The prediction is combined with multiresolution alignment and change-point detection to determine whether a compensation plan with updated G-code is necessary.

Details

Rapid Prototyping Journal, vol. 30 no. 2
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 12 January 2024

Priya Mishra and Aleena Swetapadma

Sleep arousal detection is an important factor in monitoring sleep disorders.

Abstract

Purpose

Sleep arousal detection is an important factor in monitoring sleep disorders.

Design/methodology/approach

A unique nth-layer one-dimensional (1D) convolutional neural network-based U-Net model for automatic sleep arousal identification is proposed.
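
For illustration only, a two-level 1D U-Net of the general kind described above might be sketched as follows; the channel counts, window length and output head are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 1D convolutions with ReLU, as commonly used in U-Net stages."""
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv1d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU())

class UNet1D(nn.Module):
    """Two-level 1D U-Net producing a per-sample arousal probability."""

    def __init__(self, channels=8):
        super().__init__()
        self.enc1 = conv_block(channels, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool1d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose1d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose1d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv1d(16, 1, kernel_size=1)

    def forward(self, x):                       # x: (batch, channels, time)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))     # arousal probability per time step

signals = torch.randn(4, 8, 1024)               # synthetic polysomnography channels
probabilities = UNet1D()(signals)               # (4, 1, 1024)
```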

Findings

The proposed method achieves an area under the precision–recall curve of 0.498 and an area under the receiver operating characteristic curve of 0.946.

Originality/value

No other researchers have suggested U-Net-based detection of sleep arousal.

Research limitations/implications

The experimental results show that the U-Net achieves better accuracy than state-of-the-art methods.

Practical implications

Sleep arousal detection is an important factor in monitoring sleep disorders. The objective of the work is to detect sleep arousal using different physiological channels of the human body.

Social implications

It will help improve mental health by monitoring a person's sleep.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 25 January 2023

Hui Xu, Junjie Zhang, Hui Sun, Miao Qi and Jun Kong

Attention is one of the most important factors affecting students' academic performance. Effectively analyzing students' attention in class can promote teachers' precise…

Abstract

Purpose

Attention is one of the most important factors affecting students' academic performance. Effectively analyzing students' attention in class can promote teachers' precise teaching and students' personalized learning. To intelligently analyze students' attention in the classroom from the first-person perspective, this paper proposes a fusion model based on gaze tracking and object detection. In particular, the proposed attention analysis model does not depend on any smart equipment.

Design/methodology/approach

Given a first-person view video of students' learning, the authors first estimate the gazing point by using a deep space–time neural network. Second, a single-shot multi-box detector and a fast segmentation convolutional neural network are comparatively adopted to accurately detect the objects in the video. Third, they predict the gazing objects by combining the results of gazing point estimation and object detection. Finally, the personalized attention of students is analyzed based on the predicted gazing objects and the measurable eye movement criteria.
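
As a toy illustration of the fusion step, combining an estimated gaze point with detector output can be as simple as checking which detected box contains the point; the tie-breaking rule below (prefer the smallest containing box) is an assumption, not the authors' method.

```python
from typing import List, Optional, Tuple

Box = Tuple[str, float, float, float, float]   # (label, x1, y1, x2, y2)

def gazed_object(gaze: Tuple[float, float], boxes: List[Box]) -> Optional[str]:
    """Return the label of the detected object whose box contains the gaze point.

    If several boxes contain the point, the smallest box is chosen, assuming the
    most specific object is the one being attended to (an illustrative rule).
    """
    gx, gy = gaze
    hits = [((x2 - x1) * (y2 - y1), label)
            for label, x1, y1, x2, y2 in boxes
            if x1 <= gx <= x2 and y1 <= gy <= y2]
    return min(hits)[1] if hits else None

# one video frame: an estimated gaze point and the detector's output
print(gazed_object((412.0, 305.0),
                   [("blackboard", 50, 40, 900, 380),
                    ("textbook", 380, 280, 520, 400)]))   # -> "textbook"
```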

Findings

A large number of experiments are carried out on a public database and on a new dataset built in a real classroom. The experimental results show that the proposed model not only accurately tracks the students' gazing trajectories and effectively analyzes attention fluctuations for individual students and the whole class but also provides a valuable reference for evaluating students' learning processes.

Originality/value

The contributions of this paper can be summarized as follows. The analysis of students' attention plays an important role in improving teaching quality and student achievement, yet there is little research on how to automatically and intelligently analyze students' attention. To alleviate this problem, this paper focuses on analyzing students' attention through gaze tracking and object detection in classroom teaching, which is significant for practical application in the field of education. The authors propose an effective intelligent fusion model based on deep neural networks, consisting mainly of a gazing point module and an object detection module, to analyze students' attention in classroom teaching without relying on any smart wearable device. They introduce an attention mechanism into the gazing point module to improve the performance of gazing point detection and perform comparison experiments on the public dataset to show that the gazing point module achieves better performance. They associate the eye movement criteria with visual gaze to obtain quantifiable objective data for students' attention analysis, which can provide a valuable basis for evaluating students' learning processes, provide useful learning information for both parents and teachers and support the development of individualized teaching. They also built a new database containing first-person view videos of 11 subjects in a real classroom and used it to evaluate the effectiveness and feasibility of the proposed model.

Details

Data Technologies and Applications, vol. 57 no. 5
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 2 April 2024

R.S. Vignesh and M. Monica Subashini

An abundance of techniques has been presented so far for waste classification, but they deliver inefficient results with low accuracy. Their achievement on various repositories…

Abstract

Purpose

An abundance of techniques has been presented so far for waste classification, but they deliver inefficient results with low accuracy. Their performance differs across repositories, and there is a shortage of large-scale databases for training. The purpose of the study is to provide high security.

Design/methodology/approach

In this research, optimization-assisted federated learning (FL) is introduced for thermoplastic waste segregation and classification. A deep learning (DL) network trained by Archimedes Henry gas solubility optimization (AHGSO) is used for the classification of plastic and resin types. The deep quantum neural network (DQNN) is used for first-level classification and the deep max-out network (DMN) is employed for second-level classification. The developed AHGSO is obtained by blending the features of the Archimedes optimization algorithm (AOA) and Henry gas solubility optimization (HGSO). The entities included in this approach are nodes and servers. Local training is carried out on local data, and updates are sent to the server, where the model is aggregated. Thereafter, each node downloads the global model, and update training is executed based on the downloaded global model and the local model until the stopping condition is satisfied. Finally, the local update and the aggregation at the server are performed using the averaging method. The Data tag suite (DATS_2022) dataset is used for multilevel thermoplastic waste segregation and classification.
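
The server-side averaging step can be sketched as a standard FedAvg-style weighted mean; the snippet below is a generic illustration and does not reproduce the AHGSO-trained DQNN/DMN pipeline.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client model weights by a data-size-weighted average (FedAvg-style).

    client_weights: list (one entry per node) of lists of numpy arrays.
    client_sizes:   number of local training samples per node.
    """
    total = float(sum(client_sizes))
    return [sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
            for layer in range(len(client_weights[0]))]

# three nodes sharing a toy two-layer model (shapes are arbitrary)
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=(3,))] for _ in range(3)]
global_model = federated_average(clients, client_sizes=[120, 80, 200])
print([p.shape for p in global_model])   # [(4, 3), (3,)]
```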

Findings

Using the DQNN for first-level classification, the designed optimization-assisted FL achieves an accuracy of 0.930, a mean average precision (MAP) of 0.933, a false positive rate (FPR) of 0.213, a loss of 0.211, a mean square error (MSE) of 0.328 and a root mean square error (RMSE) of 0.572. In the second-level classification using the DMN, the accuracy, MAP, FPR, loss, MSE and RMSE are 0.932, 0.935, 0.093, 0.068, 0.303 and 0.551, respectively.

Originality/value

Multilevel thermoplastic waste segregation and classification using the proposed model is accurate and improves the effectiveness of the classification.

Article
Publication date: 29 November 2023

Tarun Jaiswal, Manju Pandey and Priyanka Tripathi

The purpose of this study is to investigate and demonstrate the advancements achieved in the field of chest X-ray image captioning through the utilization of dynamic convolutional…

Abstract

Purpose

The purpose of this study is to investigate and demonstrate the advancements achieved in the field of chest X-ray image captioning through the utilization of dynamic convolutional encoder–decoder networks (DyCNN). Typical convolutional neural networks (CNNs) are unable to capture both local and global contextual information effectively and apply a uniform operation to all pixels in an image. To address this, we propose an innovative approach that integrates a dynamic convolution operation at the encoder stage, improving image encoding quality and disease detection. In addition, a decoder based on the gated recurrent unit (GRU) is used for language modeling, and an attention network is incorporated to enhance consistency. This novel combination allows for improved feature extraction, mimicking the expertise of radiologists by selectively focusing on important areas and producing coherent captions with valuable clinical information.

Design/methodology/approach

In this study, we have presented a new report generation approach that utilizes dynamic convolution applied to ResNet-101 (DyCNN) as an encoder (Verelst and Tuytelaars, 2019) and a GRU as a decoder (Dey and Salemt, 2017; Pan et al., 2020), along with an attention network (see Figure 1). This integration innovatively extends the capabilities of image encoding and sequential caption generation, representing a shift from conventional CNN architectures. With its ability to dynamically adapt receptive fields, the DyCNN excels at capturing features of varying scales within the CXR images. This dynamic adaptability significantly enhances the granularity of feature extraction, enabling precise representation of localized abnormalities and structural intricacies. By incorporating this flexibility into the encoding process, our model can distil meaningful and contextually rich features from the radiographic data. The attention mechanism enables the model to selectively focus on different regions of the image during caption generation, assigning different importance weights to different regions and thereby mimicking human perception. In parallel, the GRU-based decoder adds a critical dimension to the process by ensuring a smooth, sequential generation of captions.
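
A single attention-weighted GRU decoding step of the general form described above might be sketched as follows; the region count, embedding sizes and additive attention scoring are illustrative assumptions, not the exact DyCNN pipeline.

```python
import torch
import torch.nn as nn

class AttnGRUDecoderStep(nn.Module):
    """One decoding step: attend over encoder region features, then update a GRU.

    Dimensions are illustrative; the encoder is assumed to output a grid of
    region features flattened to (batch, regions, feat).
    """

    def __init__(self, feat=512, hid=256, vocab=1000, emb=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.score = nn.Linear(feat + hid, 1)      # additive attention
        self.gru = nn.GRUCell(emb + feat, hid)
        self.out = nn.Linear(hid, vocab)

    def forward(self, regions, prev_word, h):
        # regions: (B, R, feat), prev_word: (B,) token ids, h: (B, hid)
        e = self.score(torch.cat(
            [regions, h.unsqueeze(1).expand(-1, regions.size(1), -1)], dim=-1))
        alpha = torch.softmax(e, dim=1)                     # importance per region
        context = (alpha * regions).sum(dim=1)              # (B, feat)
        h = self.gru(torch.cat([self.embed(prev_word), context], dim=-1), h)
        return self.out(h), h, alpha.squeeze(-1)            # logits, new state, weights

step = AttnGRUDecoderStep()
logits, h, weights = step(torch.randn(2, 49, 512),
                          torch.tensor([1, 5]), torch.zeros(2, 256))
```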

Findings

The findings of this study highlight the significant advancements achieved in chest X-ray image captioning through the utilization of dynamic convolutional encoder–decoder networks (DyCNN). Experiments conducted using the IU-Chest X-ray datasets showed that the proposed model outperformed other state-of-the-art approaches. The model achieved notable scores, including a BLEU_1 score of 0.591, a BLEU_2 score of 0.347, a BLEU_3 score of 0.277 and a BLEU_4 score of 0.155. These results highlight the efficiency and efficacy of the model in producing precise radiology reports, enhancing image interpretation and clinical decision-making.

Originality/value

This work is the first of its kind to employ a DyCNN as an encoder to extract features from CXR images. In addition, a GRU was utilized as the decoder for language modeling, and attention mechanisms were incorporated into the model architecture.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 8 March 2024

Wenqian Feng, Xinrong Li, Jiankun Wang, Jiaqi Wen and Hansen Li

This paper reviews the pros and cons of different parametric modeling methods, which can provide a theoretical reference for parametric reconstruction of 3D human body models for…

Abstract

Purpose

This paper reviews the pros and cons of different parametric modeling methods, which can provide a theoretical reference for parametric reconstruction of 3D human body models for virtual fitting.

Design/methodology/approach

In this study, we briefly analyze the mainstream datasets of models of the human body used in the area to provide a foundation for parametric methods of such reconstruction. We then analyze and compare parametric methods of reconstruction based on their use of the following forms of input data: point cloud data, image contours, sizes of features and points representing the joints. Finally, we summarize the advantages and problems of each method as well as the current challenges to the use of parametric modeling in virtual fitting and the opportunities provided by it.

Findings

Considering the completeness and accuracy of the representations of the body's shape and posture, as well as the efficiency of calculating the requisite parameters, a reconstruction method that integrates orthogonal image contour morphological features, multifeature size constraints and joint point positioning can better represent human body shape, posture and personalized feature sizes, and therefore has higher research value.

Originality/value

This article develops a line of research for reconstructing a 3D model for virtual fitting based on three kinds of data, which is helpful for establishing personalized, high-precision human body models.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 2
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 29 March 2024

Pratheek Suresh and Balaji Chakravarthy

As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a…

Abstract

Purpose

As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a dielectric fluid, has emerged as a promising alternative. Ensuring reliable operations in data centre applications requires the development of an effective control framework for immersion cooling systems, which necessitates the prediction of server temperature. While deep learning-based temperature prediction models have shown effectiveness, further enhancement is needed to improve their prediction accuracy. This study aims to develop a temperature prediction model using Long Short-Term Memory (LSTM) Networks based on recursive encoder-decoder architecture.

Design/methodology/approach

This paper explores the use of deep learning algorithms to predict the temperature of a heater in a two-phase immersion-cooled system using NOVEC 7100. The performance of recursive long short-term memory encoder-decoder (R-LSTM-ED), recursive convolutional neural network-LSTM (R-CNN-LSTM) and R-LSTM approaches is compared using mean absolute error, root mean square error, mean absolute percentage error and coefficient of determination (R2) as performance metrics. The impact of window size, sampling period and noise within the training data on the performance of the model is investigated.
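
The four reported metrics are standard; a small numpy helper for computing them on a forecast window might look like this (the temperature values in the example are invented).

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """MAE, RMSE, MAPE (%) and R2 for a temperature forecast (numpy sketch)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}

# hypothetical heater temperatures (deg C) over a short forecast horizon
measured  = [55.2, 55.8, 56.1, 56.4, 56.9]
predicted = [55.0, 55.9, 56.3, 56.2, 57.1]
print(forecast_metrics(measured, predicted))
```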

Findings

The R-LSTM-ED consistently outperforms the R-LSTM model by 6%, 15.8% and 12.5%, and the R-CNN-LSTM model by 4%, 11% and 12.3%, in the forecast ranges of 10, 30 and 60 s, respectively, averaged across all the workloads considered in the study. Based on the study, the optimum sampling period is found to be 2 s and the optimum window size 60 s. The performance of the model deteriorates significantly as the noise level reaches 10%.

Research limitations/implications

The proposed models are currently trained on data collected from an experimental setup simulating data centre loads. Future research should seek to extend the applicability of the models by incorporating time series data from immersion-cooled servers.

Originality/value

The proposed multivariate-recursive-prediction models are trained and tested by using real Data Centre workload traces applied to the immersion-cooled system developed in the laboratory.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 23 January 2024

Wang Zengqing, Zheng Yu Xie and Jiang Yiling

With the rapid development of railway-intelligent video technology, scene understanding is becoming more and more important. Semantic segmentation is a major part of scene…

Abstract

Purpose

With the rapid development of railway-intelligent video technology, scene understanding is becoming more and more important. Semantic segmentation is a major part of scene understanding. There is an urgent need for an algorithm with high accuracy and real-time performance to meet current requirements for railway identification. In response to this demand, this paper aims to explore a variety of models, accurately locate and segment important railway signs based on the improved SegNeXt algorithm, supplement the railway safety protection system and improve the intelligent level of railway safety protection.

Design/methodology/approach

This paper studies the performance of existing models on RailSem19 and examines the defects of each model in order to develop an algorithm dedicated to railway semantic segmentation. The authors explore the optimal configuration of the SegNeXt model for railway scenes and achieve this goal by improving the encoder and decoder structure.

Findings

This paper proposes an improved SegNeXt algorithm: it first explores the performance of various models on railways, studies the problems of railway semantic segmentation and then analyzes the specific issues. While retaining SegNeXt's original MSCAN encoder, multiscale information fusion with techniques such as multihead attention and masking is used to extract finer detailed features, solving the inaccurate object segmentation of the original SegNeXt algorithm. The improved algorithm is of great significance for the segmentation and recognition of railway signs.
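
As a generic illustration of multiscale fusion with multihead attention, features from several encoder stages could be resized to a common grid and refined with self-attention, roughly as below; this is a sketch of the idea, not the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttnFusion(nn.Module):
    """Fuse encoder feature maps from several scales with multi-head attention.

    A sketch of the general idea only: each scale is resized to a common grid,
    flattened into tokens and refined with self-attention before the decoder.
    """

    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, features):
        # features: list of (B, C, Hi, Wi) maps from different encoder stages
        h, w = features[0].shape[-2:]
        resized = [F.interpolate(f, size=(h, w), mode="bilinear", align_corners=False)
                   for f in features]
        tokens = torch.stack(resized, dim=0).sum(dim=0)       # (B, C, h, w)
        b, c = tokens.shape[:2]
        seq = tokens.flatten(2).transpose(1, 2)               # (B, h*w, C)
        refined, _ = self.attn(seq, seq, seq)
        seq = self.norm(seq + refined)                        # residual + norm
        return seq.transpose(1, 2).reshape(b, c, h, w)

fused = MultiScaleAttnFusion()([torch.randn(1, 64, 32, 32),
                                torch.randn(1, 64, 16, 16),
                                torch.randn(1, 64, 8, 8)])
```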

Research limitations/implications

The model constructed in this paper has advantages in the feature segmentation of distant small objects, but it still suffers from segmentation fractures on the railway, which is not completely segmented. In addition, in the throat area, the segmentation results are not accurate because of the complexity of the railway.

Social implications

The identification and segmentation of railway signs based on the improved SegNeXt algorithm in this paper is of great significance for the understanding of existing railway scenes; it can greatly improve the classification and recognition of small railway object features and enhance railway security.

Originality/value

This article introduces an enhanced version of the SegNeXt algorithm, which aims to improve the accuracy of semantic segmentation on railways. The study begins by investigating the performance of different models in railway scenarios and identifying the challenges associated with semantic segmentation in this particular domain. To address these challenges, the proposed approach builds upon the strong foundation of the original SegNeXt algorithm, leveraging techniques such as multi-scale information fusion, multi-head attention and masking to extract finer details and enhance feature representation. By doing so, the improved algorithm effectively resolves the issue of inaccurate object segmentation encountered in the original SegNeXt algorithm. This advancement holds significant importance for the accurate recognition and segmentation of railway signage.

Details

Smart and Resilient Transportation, vol. 6 no. 1
Type: Research Article
ISSN: 2632-0487

Article
Publication date: 28 December 2023

Ankang Ji, Xiaolong Xue, Limao Zhang, Xiaowei Luo and Qingpeng Man

Pavement crack detection is a critical task in the periodic survey. Efficient, effective and consistent tracking of road conditions by identifying and locating crack…

Abstract

Purpose

Pavement crack detection is a critical task in periodic surveys. Efficient, effective and consistent tracking of road conditions by identifying and locating cracks helps promptly informed managers establish an appropriate road maintenance and repair strategy, but it remains a significant challenge. This research seeks to propose practical solutions for automatic crack detection from images with efficient productivity and cost-effectiveness, thereby improving pavement performance.

Design/methodology/approach

This research applies a novel deep learning method named TransUnet for crack detection, whose encoder combines a Transformer with convolutional neural networks, leveraging a global self-attention mechanism to better extract features and enhance automatic identification. Afterward, the detected cracks are used to quantify morphological features through five indicators: length, mean width, maximum width, area and ratio. These analyses can provide valuable information for engineers to assess pavement condition with efficient productivity.
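
Quantifying the five indicators from a binary crack mask can be sketched with a skeleton/distance-transform approximation; the formulas below (skeleton pixel count for length, twice the distance transform for width, pixel fraction for ratio) are assumptions rather than the paper's definitions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def crack_morphology(mask: np.ndarray, mm_per_pixel: float = 1.0) -> dict:
    """Quantify a binary crack mask with the five indicators named in the abstract.

    The exact formulas are assumptions: length is approximated by the skeleton
    pixel count, width by twice the distance transform sampled on the skeleton.
    """
    mask = mask.astype(bool)
    skeleton = skeletonize(mask)
    dist = distance_transform_edt(mask)            # half-width at every crack pixel
    widths = 2.0 * dist[skeleton]
    return {
        "length": float(skeleton.sum() * mm_per_pixel),
        "mean_width": float(widths.mean() * mm_per_pixel) if widths.size else 0.0,
        "max_width": float(widths.max() * mm_per_pixel) if widths.size else 0.0,
        "area": float(mask.sum() * mm_per_pixel ** 2),
        "ratio": float(mask.mean()),               # crack pixels / image pixels
    }

# toy example: a 3-pixel-wide horizontal crack in a 64 x 64 image
mask = np.zeros((64, 64), dtype=np.uint8)
mask[30:33, 8:56] = 1
print(crack_morphology(mask, mm_per_pixel=0.5))
```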

Findings

In the training process, the TransUnet is fed a crack dataset generated by data augmentation with a resolution of 224 × 224 pixels. Subsequently, a test set containing 80 new images is used for the crack detection task based on the best selected TransUnet with a learning rate of 0.01 and a batch size of 1, achieving an accuracy of 0.8927, a precision of 0.8813, a recall of 0.8904, an F1-measure and Dice of 0.8813 and a mean intersection over union of 0.8082. Comparisons with several state-of-the-art methods indicate that the developed approach outperforms them with greater efficiency and higher reliability.

Originality/value

The developed approach combines TransUnet with an integrated quantification algorithm for crack detection and quantification, performing excellently in comparisons and on the evaluation metrics, and it can provide solutions that could potentially serve as the basis for an automated, cost-effective pavement condition assessment scheme.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 20 May 2022

Noemi Manara, Lorenzo Rosset, Francesco Zambelli, Andrea Zanola and America Califano

In the field of heritage science, especially applied to buildings and artefacts made by organic hygroscopic materials, analyzing the microclimate has always been of extreme…

Abstract

Purpose

In the field of heritage science, especially applied to buildings and artefacts made of organic hygroscopic materials, analyzing the microclimate has always been of extreme importance. In particular, in many cases, knowledge of the outdoor/indoor microclimate may support the decision process in conservation and preservation matters of historic buildings. This knowledge is often gained by implementing long and time-consuming monitoring campaigns that allow the collection of atmospheric and climatic data.

Design/methodology/approach

Sometimes the collected time series may be corrupted, incomplete and/or affected by sensor errors because of the remoteness of the historic building, the natural aging of the sensors or the lack of continuous checks on the data downloading process. For this reason, this work proposes an innovative approach for reconstructing the indoor microclimate of heritage buildings from knowledge of the outdoor one alone. The methodology is based on machine learning tools known as variational autoencoders (VAEs), which are able to reconstruct time series and/or fill data gaps.
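
A minimal VAE for gap filling on fixed-length climate windows might look like the sketch below; the window length, layer sizes and masking of the reconstruction loss are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TimeSeriesVAE(nn.Module):
    """Small variational autoencoder over fixed-length climate windows.

    Window length and layer sizes are illustrative; the idea is that a window
    with missing values can be encoded and the decoder's output used to fill
    the gaps.
    """

    def __init__(self, window=48, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(window, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, window))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar, observed_mask):
    # reconstruction error only on observed samples, plus the usual KL term
    rec = ((recon - x) ** 2 * observed_mask).sum()
    kld = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()
    return rec + kld

# fill gaps in one window of indoor climate readings (synthetic numbers)
vae = TimeSeriesVAE()
x = torch.randn(1, 48)
mask = torch.ones(1, 48)
mask[0, 20:28] = 0                                     # an 8-sample gap
recon, mu, logvar = vae(x * mask)
filled = torch.where(mask.bool(), x, recon)            # keep measurements, fill gaps
```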

Findings

The proposed approach is implemented using data collected in Ringebu Stave Church, a Norwegian medieval wooden heritage building. A realistic time series of the church's natural internal climate was successfully reconstructed for the vast majority of the year.

Originality/value

The novelty of this work is discussed in the framework of the existing literature. The work explores the potential of machine learning tools compared to traditional ones, providing a method that is able to reliably fill missing data in time series.

Details

International Journal of Building Pathology and Adaptation, vol. 42 no. 1
Type: Research Article
ISSN: 2398-4708
