Search results

1 – 10 of 20
Open Access
Article
Publication date: 13 March 2024

Tjaša Redek and Uroš Godnov

Abstract

Purpose

The Internet has changed consumer decision-making and influenced business behaviour. User-generated product information is abundant and readily available. This paper argues that user-generated content can be efficiently utilised for business intelligence using data science and develops an approach to demonstrate the methods and benefits of the different techniques.

Design/methodology/approach

Using Python Selenium, Beautiful Soup and various text mining approaches in R to access, retrieve and analyse user-generated content, we argue that (1) companies can extract information about the product attributes that matter most to consumers and (2) user-generated reviews enable the use of text mining results in combination with other demographic and statistical information (e.g. ratings) as an efficient input for competitive analysis.
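The parsing step described above can be sketched in a few lines. This is a minimal illustration only: the HTML snippet and its class names ("review", "rating") are hypothetical stand-ins for a real product page fetched with Selenium, not the actual retailer markup used in the study.

```python
# Minimal sketch of the retrieval-and-parsing step. The HTML snippet and
# its class names ("review", "rating") are hypothetical stand-ins for a
# real product page fetched with Selenium.
from bs4 import BeautifulSoup

html = """
<div class="review"><span class="rating">4</span><p>Great battery life.</p></div>
<div class="review"><span class="rating">2</span><p>Screen scratches easily.</p></div>
"""

soup = BeautifulSoup(html, "html.parser")
reviews = [
    {"rating": int(block.select_one(".rating").text),
     "text": block.select_one("p").text}
    for block in soup.select(".review")
]
# reviews now pairs each review text with its numeric rating, ready to be
# combined with demographic and statistical information for analysis.
```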

Findings

The paper shows that combining different types of data (textual and numerical data) and applying and combining different methods can provide organisations with important business information and improve business performance.


Originality/value

The study makes several contributions to the marketing and management literature, mainly by illustrating the methodological advantages of text mining and accompanying statistical analysis, the different types of distilled information and their use in decision-making.

Details

Kybernetes, vol. 53 no. 13
Type: Research Article
ISSN: 0368-492X

Open Access
Article
Publication date: 31 July 2023

Daniel Šandor and Marina Bagić Babac

Abstract

Purpose

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is literally said, making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using machine and deep learning approaches.

Design/methodology/approach

For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
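A toy version of one of the machine-learning baselines named above (logistic regression over TF-IDF features) might look like the following. The four comments and labels are invented for illustration; the study itself used 1.3 million real social media comments.

```python
# Toy sarcasm-detection baseline: TF-IDF features fed to logistic
# regression. The comments and labels are invented; the study's data set
# held 1.3 million social media comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Oh great, another Monday morning",
    "I really enjoyed this book",
    "Wow, what a fantastic traffic jam",
    "The weather was lovely today",
]
labels = [1, 0, 1, 0]  # 1 = sarcastic, 0 = not sarcastic

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)
train_acc = model.score(comments, labels)
```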

Findings

The performance of machine and deep learning models was compared in the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art natural language processing model, the BERT-based model, outperformed the other machine and deep learning models.

Originality/value

This study compared the performance of the various machine and deep learning models in the task of sarcasm detection using the data set of 1.3 million comments from social media.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247

Open Access
Article
Publication date: 5 February 2024

Krištof Kovačič, Jurij Gregorc and Božidar Šarler

Abstract

Purpose

This study aims to develop an experimentally validated three-dimensional numerical model for predicting different flow patterns produced with a gas dynamic virtual nozzle (GDVN).

Design/methodology/approach

The physical model is posed in the mixture formulation and copes with the unsteady, incompressible, isothermal, Newtonian, low turbulent two-phase flow. The computational fluid dynamics numerical solution is based on the half-space finite volume discretisation. The geo-reconstruct volume-of-fluid scheme tracks the interphase boundary between the gas and the liquid. To ensure numerical stability in the transition regime and adequately account for turbulent behaviour, the k-ω shear stress transport turbulence model is used. The model is validated by comparison with the experimental measurements on a vertical, downward-positioned GDVN configuration. Three different combinations of air and water volumetric flow rates have been solved numerically in the range of Reynolds numbers for airflow 1,009–2,596 and water 61–133, respectively, at Weber numbers 1.2–6.2.
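The dimensionless numbers quoted above follow their standard definitions; the sketch below checks them with illustrative textbook values (room-temperature water in a 100-micron channel), not parameters taken from the paper.

```python
# Standard definitions of the Reynolds and Weber numbers used above.
# The density, viscosity, surface tension and geometry are illustrative
# textbook values, not parameters taken from the paper.
def reynolds(rho, v, d, mu):
    """Re = rho * v * d / mu: ratio of inertial to viscous forces."""
    return rho * v * d / mu

def weber(rho, v, d, sigma):
    """We = rho * v**2 * d / sigma: inertial vs surface-tension forces."""
    return rho * v ** 2 * d / sigma

# Water at room temperature in a 100-micron channel at 1 m/s
Re = reynolds(rho=998.0, v=1.0, d=100e-6, mu=1.0e-3)   # ~100
We = weber(rho=998.0, v=1.0, d=100e-6, sigma=0.072)    # ~1.4
```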

Findings

The half-space symmetry allows the numerical reconstruction of the dripping and jetting modes and an indication of the whipping mode. The kinetic energy transfer from the gas to the liquid is analysed, and locations with locally increased gas kinetic energy are observed. The calculated jet shapes match the experimentally obtained high-speed camera videos reasonably well.

Practical implications

The model is used for the virtual studies of new GDVN nozzle designs and optimisation of their operation.

Originality/value

To the best of the authors’ knowledge, the developed model numerically reconstructs all three GDVN flow regimes for the first time.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 4
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 22 March 2024

Shahin Alipour Bonab, Alireza Sadeghi and Mohammad Yazdani-Asrami

Abstract

Purpose

The ionization of the air surrounding the phase conductor in high-voltage transmission lines results in a phenomenon known as the Corona effect. To avoid this, Corona rings are used to dampen the electric field imposed on the insulator. The purpose of this study is to present a fast and intelligent surrogate model for determining the electric field imposed on the surface of a 120 kV composite insulator in the presence of the Corona ring.

Design/methodology/approach

Usually, the structural design parameters of the Corona ring are selected through an optimization procedure combined with numerical simulations such as the finite element method (FEM). These methods are slow and computationally expensive, which severely limits the speed of such optimization problems. In this paper, a novel surrogate model is proposed that can calculate the maximum electric field imposed on a ceramic insulator in a 120 kV line. The surrogate model was created from different scenarios of the height, radius and inner radius of the Corona ring as the inputs of the model, while the maximum electric field on the body of the insulator was considered as the output.

Findings

The proposed model was based on artificial intelligence techniques that have high accuracy and low computational time. Three methods were used to develop the AI-based surrogate model: a cascade forward neural network (CFNN), support vector regression and k-nearest neighbours regression. The results indicated that the CFNN has the highest accuracy among these methods, with an R-squared of 99.81% and a root mean squared error of only 0.045468, while the testing time is less than 10 ms.
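The surrogate idea can be sketched with the simplest of the three methods named above (k-nearest-neighbours regression). The training samples below are synthetic stand-ins for FEM-generated data: the inputs play the role of ring height, radius and inner radius, and the response is a hypothetical smooth function standing in for the maximum electric field.

```python
# Sketch of the surrogate idea using k-nearest-neighbours regression.
# Inputs stand in for ring height, radius and inner radius; the response
# is a hypothetical smooth function standing in for the max electric field.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(200, 3))              # height, radius, inner radius
y = 1.0 / (0.1 + X[:, 0]) + X[:, 1] - 0.5 * X[:, 2]   # stand-in response

surrogate = KNeighborsRegressor(n_neighbors=5).fit(X, y)
e_field = surrogate.predict([[0.5, 0.5, 0.5]])        # one query, no FEM run needed
```

Once trained, each query costs only a neighbour lookup, which is what makes the surrogate so much faster than re-running a FEM simulation per design candidate.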

Originality/value

To the best of the authors’ knowledge, this is the first time a surrogate method has been proposed for predicting the maximum electric field imposed on high-voltage insulators in the presence of a Corona ring, and it is faster than any conventional finite element method.

Details

World Journal of Engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1708-5284

Open Access
Article
Publication date: 19 December 2023

Marcin Nowak, Marta Pawłowska-Nowak, Małgorzata Kokocińska and Piotr Kułyk

Abstract

Purpose

With the use of the grey incidence analysis (GIA), indicators such as the absolute degree of grey incidence (εij), relative degree of grey incidence (rij) or synthetic degree of grey incidence (ρij) are calculated. However, some of the assumptions made to calculate them are arguable, which may also have a material impact on the reliability of test results. In this paper, the authors analyse one of the indicators of the GIA, namely the relative degree of grey incidence. The aim of the article was to verify the hypothesis that, in determining the relative degree of grey incidence, the method of standardising the elements in a series significantly affects the test results.

Design/methodology/approach

To achieve the purpose of the article, the authors used the numerical simulation method and the logical analysis method (to draw conclusions from the tests).

Findings

It turned out that the applied method of standardising elements in series when calculating the relative degree of grey incidence significantly affects the test results. Moreover, the manner of standardisation used in the original method (which involves dividing all elements by the first element) is not the best. Much more reliable results are obtained by a standardisation that involves dividing all elements by their arithmetic mean.
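The two standardisations compared above can be stated in a few lines; the series below is purely illustrative.

```python
# The two standardisations compared above. The original GIA scheme
# divides every element of a series by its first element; the proposed
# alternative divides by the arithmetic mean. The series is illustrative.
def standardise_by_first(series):
    return [x / series[0] for x in series]

def standardise_by_mean(series):
    mean = sum(series) / len(series)
    return [x / mean for x in series]

s = [2.0, 4.0, 6.0, 8.0]
by_first = standardise_by_first(s)  # anchored entirely to the first element
by_mean = standardise_by_mean(s)    # anchored to the series as a whole
```

The contrast makes the sensitivity visible: dividing by the first element makes every standardised value depend on a single, possibly atypical, observation, whereas dividing by the mean spreads that dependence across the whole series.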

Research limitations/implications

Limitations of the conducted evaluation involve in particular the limited scope of inference, because the obtained results refer to only one of the indicators classified under the GIA.

Originality/value

In this article, the authors have evaluated the model of GIA in which the relative degree of grey incidence is determined. As a result of the research, the authors have proposed a recommendation regarding a change in the method of standardising variables, which will contribute to obtaining more reliable results in relational tests using the grey system theory.

Details

Grey Systems: Theory and Application, vol. 14 no. 2
Type: Research Article
ISSN: 2043-9377

Article
Publication date: 29 March 2024

Pratheek Suresh and Balaji Chakravarthy

Abstract

Purpose

As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a dielectric fluid, has emerged as a promising alternative. Ensuring reliable operations in data centre applications requires the development of an effective control framework for immersion cooling systems, which necessitates the prediction of server temperature. While deep learning-based temperature prediction models have shown effectiveness, further enhancement is needed to improve their prediction accuracy. This study aims to develop a temperature prediction model using Long Short-Term Memory (LSTM) Networks based on recursive encoder-decoder architecture.

Design/methodology/approach

This paper explores the use of deep learning algorithms to predict the temperature of a heater in a two-phase immersion-cooled system using NOVEC 7100. The performance of recursive-long short-term memory-encoder-decoder (R-LSTM-ED), recursive-convolutional neural network-LSTM (R-CNN-LSTM) and R-LSTM approaches are compared using mean absolute error, root mean square error, mean absolute percentage error and coefficient of determination (R2) as performance metrics. The impact of window size, sampling period and noise within training data on the performance of the model is investigated.
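The four performance metrics named above have standard definitions; a short self-check with invented temperature readings (not data from the experimental setup) is sketched below.

```python
# Standard definitions of the four metrics named above (MAE, RMSE, MAPE
# and R2), checked on a pair of invented heater-temperature series.
import numpy as np

def metrics(y_true, y_pred):
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / y_true)) * 100.0
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return mae, rmse, mape, r2

y_true = np.array([50.0, 52.0, 55.0, 53.0])  # illustrative temperatures (degC)
y_pred = np.array([49.0, 53.0, 54.0, 53.0])
mae, rmse, mape, r2 = metrics(y_true, y_pred)
```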

Findings

The R-LSTM-ED consistently outperforms the R-LSTM model by 6%, 15.8% and 12.5%, and R-CNN-LSTM model by 4%, 11% and 12.3% in all forecast ranges of 10, 30 and 60 s, respectively, averaged across all the workloads considered in the study. The optimum sampling period based on the study is found to be 2 s and the window size to be 60 s. The performance of the model deteriorates significantly as the noise level reaches 10%.

Research limitations/implications

The proposed models are currently trained on data collected from an experimental setup simulating data centre loads. Future research should seek to extend the applicability of the models by incorporating time series data from immersion-cooled servers.

Originality/value

The proposed multivariate-recursive-prediction models are trained and tested by using real Data Centre workload traces applied to the immersion-cooled system developed in the laboratory.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 22 June 2022

Serena Summa, Alex Mircoli, Domenico Potena, Giulia Ulpiani, Claudia Diamantini and Costanzo Di Perna

Abstract

Purpose

Nearly 75% of EU buildings are not energy-efficient enough to meet the international climate goals, which triggers the need to develop sustainable construction techniques with a high degree of resilience against climate change. In this context, a promising construction technique is represented by ventilated façades (VFs). This paper aims to propose three different VFs, and the authors define a novel machine learning-based approach to evaluate and predict their energy performance under different boundary conditions, without the need for expensive on-site experimentation.

Design/methodology/approach

The approach is based on the use of machine learning algorithms for the evaluation of different VF configurations and allows for the prediction of the temperatures in the cavities and of the heat fluxes. The authors trained different regression algorithms and obtained low prediction errors, in particular for temperatures. The authors used such models to simulate the thermo-physical behavior of the VFs and determined the most energy-efficient design variant.

Findings

The authors found that regression trees allow for an accurate simulation of the thermal behavior of VFs. The authors also studied feature weights to determine the most relevant thermo-physical parameters. Finally, the authors determined the best design variant and the optimal air velocity in the cavity.
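A minimal version of the regression-tree step might look like the following. The features (outdoor temperature, solar irradiance, cavity air velocity) and the cavity-temperature response are synthetic stand-ins, not the measured data from the study.

```python
# Minimal sketch of fitting a regression tree to facade data. The
# boundary-condition features and the cavity-temperature response below
# are synthetic stand-ins for the measured data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(-5, 35, 500),   # outdoor temperature (degC)
    rng.uniform(0, 900, 500),   # solar irradiance (W/m2)
    rng.uniform(0, 3, 500),     # cavity air velocity (m/s)
])
# Hypothetical response: warmer outside and stronger sun raise the cavity
# temperature; more ventilation lowers it.
y = X[:, 0] + 0.01 * X[:, 1] - 2.0 * X[:, 2]

tree = DecisionTreeRegressor(max_depth=6).fit(X, y)
t_cavity = tree.predict([[20.0, 500.0, 1.0]])  # query one boundary condition
```

Fitted once, such a model can sweep design variants and air velocities cheaply, which is the role the study assigns to its trained regressors.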

Originality/value

This study is unique in four main aspects: the thermo-dynamic analysis is performed under different thermal masses, positions of the cavity and geometries; the VFs are mated with a controlled ventilation system, used to parameterize the thermodynamic behavior under stepwise variations of the air inflow; temperatures and heat fluxes are predicted through machine learning models; the best configuration is determined through simulations, with no onerous in situ experimentations needed.

Details

Construction Innovation, vol. 24 no. 7
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 9 January 2024

Kaizheng Zhang, Jian Di, Jiulong Wang, Xinghu Wang and Haibo Ji

Abstract

Purpose

Many existing trajectory optimization algorithms use parameters like maximum velocity or acceleration to formulate constraints. Because the quadrotor's actual tracking capability is ignored, the generated trajectories may not be suitable for tracking control. The purpose of this paper is to design an online adjustment algorithm to improve the overall quadrotor trajectory tracking performance.

Design/methodology/approach

The authors propose a reference trajectory resampling layer (RTRL) to dynamically adjust the reference signals according to the current tracking status and future tracking risks. First, the authors design a risk-aware tracking monitor that uses the Frenet tracking errors and the curvature and torsion of the reference trajectory to evaluate tracking risks. Then, the authors propose an online adjusting algorithm using the time scaling method.
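The time-scaling idea mentioned above can be illustrated in a few lines. The waypoints and the scale factor are invented; a real RTRL would derive the factor from the risk-aware tracking monitor rather than fixing it by hand.

```python
# Toy illustration of the time-scaling idea: when tracking risk is high,
# stretch the reference timeline so the same waypoints arrive more slowly.
# The waypoints and scale factor are invented; a real RTRL would derive
# the factor from the risk-aware tracking monitor.
def time_scale(ref, alpha):
    """ref: list of (time, position) pairs; alpha > 1 slows the reference."""
    return [(alpha * t, p) for t, p in ref]

ref = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)]
slowed = time_scale(ref, alpha=1.5)  # geometry unchanged, timeline stretched
```

The key property is that the geometric path is untouched: only the timing changes, so the tracker is asked to follow the same curve at a pace it can actually achieve.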

Findings

The proposed RTRL is shown to be effective in improving quadrotor trajectory tracking accuracy in both simulation and experimental results.

Originality/value

Infeasible reference trajectories may cause serious accidents for autonomous quadrotors. The results of this paper can improve the safety of autonomous quadrotors in applications.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 8 February 2024

Juho Park, Junghwan Cho, Alex C. Gang, Hyun-Woo Lee and Paul M. Pedersen

Abstract

Purpose

This study aims to identify an automated machine learning algorithm with high accuracy that sport practitioners can use to identify the specific factors for predicting Major League Baseball (MLB) attendance. Furthermore, by predicting spectators for each league (American League and National League) and division in MLB, the authors will identify the specific factors that increase accuracy, discuss them and provide implications for marketing strategies for academics and practitioners in sport.

Design/methodology/approach

This study used six years of daily MLB game data (2014–2019). Predictors such as game performance, weather and the unemployment rate were collected, and the attendance rate was used as the outcome variable. Random Forest, Lasso regression and XGBoost models were used to build the prediction model, and the analysis was conducted using Python 3.7.
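As a sketch of the model-building step, the snippet below fits one of the named algorithms (a random forest) to synthetic stand-ins for the rank, weather and unemployment-rate predictors; none of the values come from the actual MLB data.

```python
# Sketch of the prediction step with one of the named models (random
# forest). The features and attendance rates are synthetic stand-ins for
# the 2014-2019 MLB data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = np.column_stack([
    rng.integers(1, 6, 300),   # divisional rank (1 = best)
    rng.uniform(5, 35, 300),   # game-day temperature (degC)
    rng.uniform(3, 10, 300),   # local unemployment rate (%)
])
# Hypothetical attendance rate: better rank and milder weather help,
# higher unemployment hurts.
y = 0.9 - 0.08 * X[:, 0] + 0.004 * X[:, 1] - 0.01 * X[:, 2]

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
rate = model.predict([[1, 25.0, 5.0]])  # first-place team on a mild day
```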

Findings

After fine-tuning the tuning parameters of the XGBoost model, which had the best performance in forecasting the attendance rate, the RMSE value was 0.14 and the R2 was 0.62. The most influential variable in the model was “Rank” at 0.247, followed by “Day of the week”, “Home team” and “Day/Night game”. The “Unemployment rate”, as a macroeconomic factor, had a value of 0.06, and weather factors had a total value of 0.147.

Originality/value

This research highlights unemployment rate as a determinant affecting MLB game attendance rates. Beyond contextual elements such as climate, the findings of this study underscore the significance of economic factors, particularly unemployment rates, necessitating further investigation into these factors to gain a more comprehensive understanding of game attendance.

Details

International Journal of Sports Marketing and Sponsorship, vol. 25 no. 2
Type: Research Article
ISSN: 1464-6668

Article
Publication date: 22 March 2024

Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam

Abstract

Purpose

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we have proposed a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and a multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.

Design/methodology/approach

The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
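The second (MLP) stage can be sketched as follows. The boolean mask below stands in for the feature subset that Gray Wolf Optimization would select, and the software metrics and defect labels are synthetic.

```python
# Sketch of the MLP stage only. The boolean mask stands in for the
# feature subset that Gray Wolf Optimization would select; the software
# metrics and defect labels are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 10))            # 10 candidate software metrics
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # defects driven by two metrics

selected = np.array([True, False, False, True] + [False] * 6)  # stand-in for GWOFS
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X[:, selected], y)
train_acc = mlp.score(X[:, selected], y)
```

Training the MLP on the selected columns only is the point of the hybrid: the network sees a smaller, more relevant input, which is what the feature-selection stage is meant to buy.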

Findings

The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.

Originality/value

Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X
