Search results

1 – 10 of 39
Article
Publication date: 29 March 2024

Pratheek Suresh and Balaji Chakravarthy

Abstract

Purpose

As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a dielectric fluid, has emerged as a promising alternative. Ensuring reliable operations in data centre applications requires the development of an effective control framework for immersion cooling systems, which necessitates the prediction of server temperature. While deep learning-based temperature prediction models have shown effectiveness, further enhancement is needed to improve their prediction accuracy. This study aims to develop a temperature prediction model using Long Short-Term Memory (LSTM) Networks based on recursive encoder-decoder architecture.

Design/methodology/approach

This paper explores the use of deep learning algorithms to predict the temperature of a heater in a two-phase immersion-cooled system using NOVEC 7100. The performances of the recursive long short-term memory encoder-decoder (R-LSTM-ED), recursive convolutional neural network-LSTM (R-CNN-LSTM) and R-LSTM approaches are compared using mean absolute error, root mean square error, mean absolute percentage error and the coefficient of determination (R2) as performance metrics. The impact of window size, sampling period and noise within the training data on model performance is investigated.
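The four error metrics used to compare R-LSTM-ED, R-CNN-LSTM and R-LSTM can be sketched as below. This is an illustrative helper only, not the authors' code; the function and key names are assumptions.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MAE, RMSE, MAPE (%) and R2 for a forecast against ground truth."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))                                  # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))                           # root mean square error
    mape = np.mean(np.abs(err / y_true)) * 100.0                # mean absolute % error
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R2": r2}
```

A perfect forecast gives MAE of 0 and R2 of 1, which is why R2 close to 1 alongside low MAE/RMSE is reported as strong performance.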

Findings

The R-LSTM-ED consistently outperforms the R-LSTM model by 6%, 15.8% and 12.5%, and R-CNN-LSTM model by 4%, 11% and 12.3% in all forecast ranges of 10, 30 and 60 s, respectively, averaged across all the workloads considered in the study. The optimum sampling period based on the study is found to be 2 s and the window size to be 60 s. The performance of the model deteriorates significantly as the noise level reaches 10%.

Research limitations/implications

The proposed models are currently trained on data collected from an experimental setup simulating data centre loads. Future research should seek to extend the applicability of the models by incorporating time series data from immersion-cooled servers.

Originality/value

The proposed multivariate-recursive-prediction models are trained and tested by using real Data Centre workload traces applied to the immersion-cooled system developed in the laboratory.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 29 August 2023

Hei-Chia Wang, Martinus Maslim and Hung-Yu Liu

Abstract

Purpose

Clickbait is a deceptive headline designed to boost ad revenue without presenting closely relevant content. Clickbait has numerous negative repercussions, such as making viewers feel tricked and unhappy, causing long-term confusion, and even attracting cybercriminals. Automatic clickbait detection algorithms have been developed to address this issue. However, existing detection technologies are limited by using only one semantic representation for the same term and by the scarcity of Chinese datasets. This study aims to overcome these limitations of automated clickbait detection for Chinese data.

Design/methodology/approach

This study combines news headlines and news content to train the model to capture the probable relationship between clickbait headlines and article content. In addition, part-of-speech elements are used to generate the most appropriate semantic representation for clickbait detection, improving detection performance.

Findings

This research successfully compiled a dataset containing up to 20,896 Chinese clickbait news articles. This collection contains news headlines, articles, categories and supplementary metadata. The suggested context-aware clickbait detection (CA-CD) model outperforms existing clickbait detection approaches on many criteria, demonstrating the proposed strategy's efficacy.

Originality/value

The originality of this study resides in the newly compiled Chinese clickbait dataset and contextual semantic representation-based clickbait detection approach employing transfer learning. This method can modify the semantic representation of each word based on context and assist the model in more precisely interpreting the original meaning of news articles.

Details

Data Technologies and Applications, vol. 58 no. 2
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 25 April 2024

Abdul-Manan Sadick, Argaw Gurmu and Chathuri Gunarathna

Abstract

Purpose

Developing a reliable cost estimate at the early stage of construction projects is challenging due to inadequate project information. Most of the information at this stage is qualitative, posing additional challenges to achieving accurate cost estimates. Additionally, there is a lack of tools that use qualitative project information to forecast the budgets required for project completion. This research, therefore, aims to develop a model for setting project budgets (excluding land) during the pre-conception stage of residential buildings, when project information is mainly qualitative.

Design/methodology/approach

Due to the qualitative nature of project information at the pre-conception stage, a natural language processing model, DistilBERT (Distilled Bidirectional Encoder Representations from Transformers), was trained to predict the cost range of residential buildings at the pre-conception stage. The training and evaluation data included 63,899 building permit activity records (2021–2022) from the Victorian State Building Authority, Australia. The input data comprised the project description of each record, which included project location and basic material types (floor, frame, roofing, and external wall).
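The three-way cost banding that serves as the classification target could be prepared along these lines. This is a sketch of label construction only; the DistilBERT fine-tuning itself is omitted, the function name is an assumption, and the handling of boundary values is a guess not stated in the abstract.

```python
def cost_class(amount_aud):
    """Map a project cost (AUD) to one of the three classes used in the study.
    Boundary assignment (lower bound inclusive) is an assumption."""
    if amount_aud < 100_000 or amount_aud > 1_200_000:
        return None  # outside the modelled range
    if amount_aud < 300_000:
        return "$100,000-$300,000"
    if amount_aud < 500_000:
        return "$300,000-$500,000"
    return "$500,000-$1,200,000"
```

Each permit record's description would then be paired with `cost_class(cost)` to form the (text, label) training examples for the classifier.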

Findings

This research designed a novel tool for predicting the project budget based on preliminary project information. The model achieved 79% accuracy in classifying residential buildings into three cost classes ($100,000-$300,000, $300,000-$500,000 and $500,000-$1,200,000), with F1-scores of 0.85, 0.73 and 0.74, respectively. Additionally, the results show that the model learnt the contextual relationship between qualitative data like project location and cost.

Research limitations/implications

The current model was developed using data from Victoria state in Australia; hence, it would not return relevant outcomes for other contexts. However, future studies can adopt the methods to develop similar models for their context.

Originality/value

This research is the first to leverage a deep learning model, DistilBERT, for cost estimation at the pre-conception stage using basic project information like location and material types. Therefore, the model would contribute to overcoming data limitations for cost estimation at the pre-conception stage. Residential building stakeholders, like clients, designers, and estimators, can use the model to forecast the project budget at the pre-conception stage to facilitate decision-making.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 15 February 2024

Xinyu Liu, Kun Ma, Ke Ji, Zhenxiang Chen and Bo Yang

Abstract

Purpose

Propaganda is a prevalent technique used in social media to intentionally express opinions or actions with the aim of manipulating or deceiving users. Existing methods for propaganda detection primarily focus on capturing language features within its content. However, these methods tend to overlook the information presented within the external news environment from which propaganda news originated and spread. This news environment reflects recent mainstream media opinions and public attention and contains language characteristics of non-propaganda news. Therefore, the authors have proposed a graph-based multi-information integration network with an external news environment (abbreviated as G-MINE) for propaganda detection.

Design/methodology/approach

G-MINE comprises four parts: a textual information extraction module, an external news environment perception module, a multi-information integration module and a classifier. Specifically, the external news environment perception module and multi-information integration module extract and integrate popularity and novelty into the textual information and capture the high-order complementary information between them.

Findings

G-MINE achieves state-of-the-art performance on the TSHP-17, Qprop and PTC data sets, with accuracies of 98.24%, 90.59% and 97.44%, respectively.

Originality/value

An external news environment perception module is proposed to capture the popularity and novelty information, and a multi-information integration module is proposed to effectively fuse them with the textual information.

Details

International Journal of Web Information Systems, vol. 20 no. 2
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 1 March 2024

Wei-Zhen Wang, Hong-Mei Xiao and Yuan Fang

Abstract

Purpose

Nowadays, artificial intelligence (AI) technology has demonstrated extensive applications in the field of art design. Attribute editing is an important means of realizing clothing style and color design via computer language; it aims to edit and control a garment image based on specified target attributes while preserving other details of the original image. Current image attribute editing models often generate images containing missing or redundant attributes. To address this problem, this paper proposes a novel design method utilizing the Fashion-attribute generative adversarial network (AttGAN) model for image attribute editing specifically tailored to women’s blouses.

Design/methodology/approach

The proposed design method primarily focuses on optimizing the feature extraction network and loss function. To enhance the feature extraction capability of the model, the number of layers in the feature extraction network was increased, and the structure similarity index measure (SSIM) loss function was employed to ensure the independent attributes of the original image remained consistent. The characteristic-preserving virtual try-on network (CP_VTON) dataset was used for training to enable the editing of sleeve length and color specifically for women’s blouses.
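The SSIM term added to the loss can be sketched as a single-window computation over whole images. This is a simplification for illustration; the paper presumably uses a windowed SSIM, and the function names are assumptions.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM between two images with pixel values in [0, 1]."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

def ssim_loss(x, y):
    """SSIM loss term: 1 - SSIM, so identical images incur zero loss."""
    return 1.0 - ssim_global(x, y)
```

Minimizing `1 - SSIM` pushes the generator's output toward structural agreement with the original image, which is the stated motive for adding the term.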

Findings

The experimental results demonstrate that the optimization model’s generated outputs have significantly reduced problems related to missing attributes or visual redundancy. Through a comparative analysis of the numerical changes in the SSIM and peak signal-to-noise ratio (PSNR) before and after the model refinement, it was observed that the improved SSIM increased substantially by 27.4%, and the PSNR increased by 2.8%, serving as empirical evidence of the effectiveness of incorporating the SSIM loss function.

Originality/value

The proposed algorithm provides a promising tool for precise image editing of women’s blouses based on the GAN. This introduces a new approach to eliminate semantic expression errors in image editing, thereby contributing to the development of AI in clothing design.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 2
Type: Research Article
ISSN: 0955-6222

Keywords

Article
Publication date: 11 January 2024

Yuepeng Zhang, Guangzhong Cao, Linglong Li and Dongfeng Diao

Abstract

Purpose

The purpose of this paper is to design a new trajectory error compensation method to improve the trajectory tracking performance and compliance of the knee exoskeleton in human–exoskeleton interaction motion.

Design/methodology/approach

A trajectory error compensation method based on admittance-extended Kalman filter (AEKF) error fusion is proposed for human–exoskeleton interaction control. The admittance controller calculates the trajectory error adjustment from the feedback human–exoskeleton interaction force, and the actual trajectory error is obtained from the exoskeleton encoder feedback and the designed trajectory. Using the fusion and prediction characteristics of the EKF, the calculated trajectory error adjustment and the actual error are fused to obtain a new trajectory error compensation, which is fed back to the knee exoskeleton controller. This method is designed to improve the trajectory tracking performance of the knee exoskeleton and enhance the compliance of knee exoskeleton interaction.
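The fusion-and-prediction step described above can be sketched as a scalar Kalman-style update that blends the admittance-predicted error adjustment with the encoder-measured error. This is a simplification of the AEKF for intuition only; all symbols and noise values are illustrative assumptions.

```python
def fuse_errors(predicted, measured, p, r, q=1e-4):
    """One scalar Kalman-style step.
    predicted: admittance-derived trajectory-error adjustment
    measured:  encoder-derived actual trajectory error
    p: current error-estimate variance, r: measurement noise variance,
    q: process noise variance (illustrative values)."""
    p = p + q                  # predict: inflate uncertainty
    k = p / (p + r)            # Kalman gain
    fused = predicted + k * (measured - predicted)
    p = (1.0 - k) * p          # update uncertainty
    return fused, p
```

The fused value always lies between the two inputs, weighted by how much the filter trusts the measurement relative to the prediction; the fused compensation is what would be fed back to the joint controller.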

Findings

Six volunteers conducted comparative experiments on four different motion frequencies. The experimental results show that this method can effectively improve the trajectory tracking performance and compliance of the knee exoskeleton in human–exoskeleton interaction.

Originality/value

The AEKF method uses the data fusion idea to fuse the estimated error with measurement errors, obtaining more accurate trajectory error compensation for knee exoskeleton motion control. This work benefits the trajectory tracking performance and compliance of lower limb exoskeletons in human–exoskeleton interaction movements.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Keywords

Open Access
Article
Publication date: 31 July 2023

Daniel Šandor and Marina Bagić Babac

Abstract

Purpose

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is being said by words, thus making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate sarcasm detection using machine learning and deep learning approaches.

Design/methodology/approach

For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
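One of the classical models compared (logistic regression over text features) can be sketched as follows. The toy data stands in for the 1.3 million-comment corpus, and this is not the authors' pipeline; feature choices and names are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def sarcasm_baseline():
    """Tiny TF-IDF + logistic-regression baseline for sarcasm detection."""
    texts = [
        "oh great, another monday",            # sarcastic
        "wow, i just love waiting in line",    # sarcastic
        "the weather is nice today",           # not sarcastic
        "this restaurant serves good food",    # not sarcastic
    ]
    labels = [1, 1, 0, 0]  # 1 = sarcastic, 0 = not sarcastic
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    return model
```

A BERT-based model replaces the fixed TF-IDF vectors with contextual embeddings, which is the abstract's explanation for its stronger performance on this task.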

Findings

The performance of machine and deep learning models was compared in the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art model in natural language processing, namely the BERT-based model, outperformed the other machine and deep learning models.

Originality/value

This study compared the performance of the various machine and deep learning models in the task of sarcasm detection using the data set of 1.3 million comments from social media.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247

Keywords

Article
Publication date: 2 April 2024

R.S. Vignesh and M. Monica Subashini

Abstract

Purpose

An abundance of techniques has been presented so far for waste classification, but they deliver inefficient results with low accuracy. Their performance varies across repositories, and large-scale databases for training are insufficient. The purpose of the study is to provide high security.

Design/methodology/approach

In this research, optimization-assisted federated learning (FL) is introduced for thermoplastic waste segregation and classification. A deep learning (DL) network trained by Archimedes Henry gas solubility optimization (AHGSO) is used for the classification of plastic and resin types. A deep quantum neural network (DQNN) is used for first-level classification, and a deep max-out network (DMN) is employed for second-level classification. AHGSO is obtained by blending the features of the Archimedes optimization algorithm (AOA) and Henry gas solubility optimization (HGSO). The entities in this approach are nodes and servers. Local training is carried out on local data, and updates are sent to the server, where the model is aggregated. Each node then downloads the global model, and update training is executed on the downloaded global and local models until the stopping condition is satisfied. Finally, the local updates and server-side aggregation are combined using the averaging method. The Data tag suite (DATS_2022) dataset is used for multilevel thermoplastic waste segregation and classification.
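The server-side averaging step described above can be sketched as a generic federated-averaging helper. This is not the authors' AHGSO-trained pipeline; the function name and the optional size weighting are assumptions.

```python
import numpy as np

def federated_average(client_weights, client_sizes=None):
    """Aggregate client model weights by (optionally size-weighted) averaging,
    the server-side aggregation step of federated learning."""
    if client_sizes is None:
        client_sizes = [1] * len(client_weights)  # unweighted average
    total = float(sum(client_sizes))
    acc = np.zeros_like(np.asarray(client_weights[0], dtype=float))
    for w, n in zip(client_weights, client_sizes):
        acc += (n / total) * np.asarray(w, dtype=float)
    return acc
```

Each node would download the result as the new global model and continue local training, repeating the round until the stopping condition is met.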

Findings

By using the DQNN in first-level classification, the designed optimization-assisted FL gained an accuracy of 0.930, mean average precision (MAP) of 0.933, false positive rate (FPR) of 0.213, loss function of 0.211, mean square error (MSE) of 0.328 and root mean square error (RMSE) of 0.572. In second-level classification using the DMN, the accuracy, MAP, FPR, loss function, MSE and RMSE are 0.932, 0.935, 0.093, 0.068, 0.303 and 0.551, respectively.

Originality/value

The multilevel thermoplastic waste segregation and classification using the proposed model is accurate and improves the effectiveness of the classification.

Article
Publication date: 16 April 2024

Guilherme Homrich, Aly Ferreira Flores Filho, Paulo Roberto Eckert and David George Dorrell

Abstract

Purpose

This paper aims to introduce an alternative for modeling levitation forces between NdFeB magnets and bulks of high-temperature superconductors (HTS). The presented approach is evaluated through two different formulations and compared with experimental results.

Design/methodology/approach

The T-A and H-ϕ formulations are among the most efficient approaches for modeling superconducting materials. COMSOL Multiphysics was used to apply them to magnetic levitation models and predict the forces involved. The permanent magnet movement is modeled by combining moving meshes and magnetic field identity pairs in both 2D and 3D studies.

Findings

It is shown that it is possible to use the homogenization technique for the T-A formulation in 3D models combined with mixed formulation boundaries and moving meshes to simulate the whole device’s geometry.

Research limitations/implications

The case studies are limited to the formulations’ implementation and a brief assessment regarding degrees of freedom. The intent is to make the simulation straightforward rather than establish a benchmark.

Originality/value

The H-ϕ formulation considers the HTS bulk domain as isotropic, whereas the T-A formulation homogenization approach treats it as anisotropic. The originality of the paper lies in contrasting these different modeling approaches while incorporating the external magnetic field movement by means of the Lagrangian–Eulerian method.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 8 March 2024

Wenqian Feng, Xinrong Li, Jiankun Wang, Jiaqi Wen and Hansen Li

Abstract

Purpose

This paper reviews the pros and cons of different parametric modeling methods, which can provide a theoretical reference for parametric reconstruction of 3D human body models for virtual fitting.

Design/methodology/approach

In this study, we briefly analyze the mainstream datasets of models of the human body used in the area to provide a foundation for parametric methods of such reconstruction. We then analyze and compare parametric methods of reconstruction based on their use of the following forms of input data: point cloud data, image contours, sizes of features and points representing the joints. Finally, we summarize the advantages and problems of each method as well as the current challenges to the use of parametric modeling in virtual fitting and the opportunities provided by it.

Findings

Considering the integrity and accuracy of representations of the shape and posture of the body, and the efficiency of calculating the requisite parameters, the reconstruction method that integrates orthogonal image contour morphological features, multifeature size constraints and joint point positioning better represents human body shape, posture and personalized feature sizes, and has higher research value.

Originality/value

This article develops a research framework for reconstructing a 3D model for virtual fitting based on three kinds of data, which is helpful for establishing personalized, high-precision human body models.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 2
Type: Research Article
ISSN: 0955-6222

Keywords
