Search results

1 – 10 of 75
Open Access
Article
Publication date: 3 August 2020

Djordje Cica, Branislav Sredanovic, Sasa Tesic and Davorin Kramar

Sustainable manufacturing is one of the most important and most challenging issues in the present industrial scenario. With the intention of diminishing the negative effects associated with…

Abstract

Sustainable manufacturing is one of the most important and most challenging issues in the present industrial scenario. With the intention of diminishing the negative effects associated with cutting fluids, the machining industries are continuously developing technologies and systems for cooling/lubricating the cutting zone while maintaining machining efficiency. In the present study, three regression-based machine learning techniques, namely polynomial regression (PR), support vector regression (SVR) and Gaussian process regression (GPR), were developed to predict machining force, cutting power and cutting pressure in the turning of AISI 1045. In the development of the predictive models, the machining parameters of cutting speed, depth of cut and feed rate were considered as control factors. Since cooling/lubricating techniques significantly affect machining performance, predictive models of the quality characteristics were developed under minimum quantity lubrication (MQL) and high-pressure coolant (HPC) cutting conditions. The prediction accuracy of the developed models was evaluated by statistical error analysis methods. Results of the regression-based machine learning techniques were also compared with one of the most frequently used machine learning methods, namely artificial neural networks (ANN). Finally, a metaheuristic approach based on a neural network algorithm was utilized to perform an efficient multi-objective optimization of process parameters for both cutting environments.
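For readers who want to see what fitting the three regression-based learners looks like in practice, here is a minimal sketch using scikit-learn on synthetic turning data; the data-generating relationship, parameter ranges and model settings are illustrative assumptions and are not taken from the study.

# Minimal sketch (not the authors' code): fitting the three regression-based
# learners named in the abstract -- polynomial regression (PR), support vector
# regression (SVR) and Gaussian process regression (GPR) -- on hypothetical
# turning data with cutting speed, depth of cut and feed rate as inputs.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic control factors: cutting speed, depth of cut, feed rate.
X = rng.uniform([80, 0.5, 0.05], [250, 2.0, 0.3], size=(60, 3))
# Stand-in response for machining force (an assumed relationship plus noise).
y = 50 * X[:, 1] * X[:, 2] + 0.1 * X[:, 0] + rng.normal(0, 1, 60)

models = {
    "PR":  make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    "SVR": make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.1)),
    "GPR": make_pipeline(StandardScaler(),
                         GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-2)),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-score.mean():.2f}")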

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Keywords

Open Access
Article
Publication date: 18 October 2023

Ivan Soukal, Jan Mačí, Gabriela Trnková, Libuse Svobodova, Martina Hedvičáková, Eva Hamplova, Petra Maresova and Frank Lefley

The primary purpose of this paper is to identify the so-called core authors and their publications according to pre-defined criteria and thereby direct the users to the fastest…

Abstract

Purpose

The primary purpose of this paper is to identify the so-called core authors and their publications according to pre-defined criteria and thereby direct the users to the fastest and easiest way to get a picture of the otherwise pervasive field of bankruptcy prediction models. The authors aim to present state-of-the-art bankruptcy prediction models assembled by the field's core authors and critically examine the approaches and methods adopted.

Design/methodology/approach

The authors conducted a literature search in November 2022 through scientific databases Scopus, ScienceDirect and the Web of Science, focussing on a publication period from 2010 to 2022. The database search query was formulated as “Bankruptcy Prediction” and “Model or Tool”. However, the authors intentionally did not specify any model or tool to make the search non-discriminatory. The authors reviewed over 7,300 articles.

Findings

This paper has addressed the following research questions: (1) What are the most important publications of the core authors in terms of the target country, size of the sample, sector of the economy and specialization in SMEs? (2) What are the most used methods for deriving or adjusting models appearing in the articles of the core authors? (3) To what extent do the core authors include accounting-based variables and non-financial or macroeconomic indicators in their prediction models? Despite the advantages of new-age methods, based on the information in the articles analyzed, it can be deduced that conventional methods will continue to be beneficial, mainly due to their greater ease of use and the transferability of the derived models.

Research limitations/implications

The authors identify several gaps in the literature which this research does not address but could be the focus of future research.

Practical implications

The authors provide practitioners and academics with an extract from the wide range of studies on bankruptcy prediction models or tools available in scientific databases, based on the review of a large number of records. This research will be of interest to shareholders, corporations and financial institutions seeking financial distress or bankruptcy prediction models to help identify troubled firms in the early stages of distress.

Social implications

Bankruptcy is a major concern for society in general, especially in today's economic environment. Therefore, being able to predict possible business failure at an early stage will give an organization time to address the issue and maybe avoid bankruptcy.

Originality/value

To the authors' knowledge, this is the first paper to identify the core authors in the field of bankruptcy prediction models and methods. The primary value of the study is its current overview and analysis of the theoretical and practical development of knowledge in this field in the form of the construction of new models using classical or new-age methods. The paper also adds value by critically examining existing models and their modifications, including a discussion of the benefits of using non-accounting variables.

Details

Central European Management Journal, vol. 32 no. 1
Type: Research Article
ISSN: 2658-0845

Keywords

Open Access
Article
Publication date: 28 June 2022

Yahya Alnashri and Hasan Alzubaidi

The main purpose of this paper is to introduce the gradient discretisation method (GDM) to a system of reaction diffusion equations subject to non-homogeneous Dirichlet boundary…

Abstract

Purpose

The main purpose of this paper is to introduce the gradient discretisation method (GDM) to a system of reaction diffusion equations subject to non-homogeneous Dirichlet boundary conditions. Then, the authors show that the GDM provides a comprehensive convergence analysis of several numerical methods for the considered model. The convergence is established without non-physical regularity assumptions on the solutions.

Design/methodology/approach

In this paper, the authors use the GDM to discretise a system of reaction diffusion equations with non-homogeneous Dirichlet boundary conditions.
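The abstract does not spell out the specific system; for orientation only, a generic two-component reaction diffusion system with non-homogeneous Dirichlet boundary conditions, of the kind the description suggests, can be written as follows (the reaction terms f and g, the diffusion coefficients and the boundary/initial data are illustrative placeholders, not the paper's model):

\begin{align*}
  \partial_t u - \nabla\cdot(D_u \nabla u) &= f(u,v) &&\text{in } \Omega\times(0,T),\\
  \partial_t v - \nabla\cdot(D_v \nabla v) &= g(u,v) &&\text{in } \Omega\times(0,T),\\
  u &= u_D, \quad v = v_D &&\text{on } \partial\Omega\times(0,T),\\
  u(\cdot,0) &= u_0, \quad v(\cdot,0) = v_0 &&\text{in } \Omega.
\end{align*}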

Findings

The authors provide a generic convergence analysis for a system of reaction diffusion equations. The authors introduce a specific example of a numerical scheme that fits into the gradient discretisation method and conduct a numerical test to measure the efficiency of the proposed method.

Originality/value

This work provides a unified convergence analysis of several numerical methods for a system of reaction diffusion equations. The generic convergence is proved under the classical assumptions on the solutions.

Open Access
Article
Publication date: 31 May 2023

Xiaojie Xu and Yun Zhang

For policymakers and participants of financial markets, predictions of trading volumes of financial indices are important issues. This study aims to address such a prediction…

Abstract

Purpose

For policymakers and participants of financial markets, predictions of trading volumes of financial indices are important issues. This study aims to address such a prediction problem for the CSI300 nearby futures using high-frequency data recorded each minute, from the launch date of the futures to roughly two years after the constituent stocks of the futures all became shortable, a period that witnessed significantly increased trading activity.

Design/methodology/approach

To answer the following questions, this study adopts neural networks to model the irregular trading volume series of the CSI300 nearby futures: can lags of the trading volume series be used to make predictions; if so, how far ahead can the predictions go and how accurate can they be; can predictive information from the trading volumes of the CSI300 spot and first distant futures improve prediction accuracy, and by what magnitude; how sophisticated does the model need to be; and how robust are its predictions?

Findings

The results of this study show that a simple neural network model with 10 hidden neurons could be constructed to robustly predict the trading volume of the CSI300 nearby futures using 1–20 min ahead trading volume data. The model yields a root mean square error of about 955 contracts. Utilizing additional predictive information from the trading volumes of the CSI300 spot and first distant futures could further improve prediction accuracy, with a magnitude of improvement of about 1–2%. This benefit is particularly significant when the trading volume of the CSI300 nearby futures is close to zero. Another benefit, at the cost of a slightly more sophisticated model with more hidden neurons, is that predictions could be generated through 1–30 min ahead trading volume data.
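As a rough illustration of the kind of model the findings describe, the sketch below fits a small feed-forward network with 10 hidden neurons to lagged values of a synthetic per-minute volume series; the data, the lag layout and the library choice (scikit-learn) are assumptions, not the authors' setup.

# Illustrative sketch only (not the authors' model): a small feed-forward
# network with 10 hidden neurons predicting the next value of a volume series
# from its previous 20 observations. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
volume = rng.poisson(lam=900, size=5000).astype(float)  # stand-in for per-minute volume

n_lags = 20  # use the previous 1-20 minutes of volume as inputs
X = np.column_stack([volume[i:len(volume) - n_lags + i] for i in range(n_lags)])
y = volume[n_lags:]

split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=1)
model.fit(X[:split], y[:split])
rmse = mean_squared_error(y[split:], model.predict(X[split:])) ** 0.5
print(f"RMSE (contracts): {rmse:.1f}")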

Originality/value

The results of this study could be used for multiple purposes, including designing financial index trading systems and platforms, monitoring systematic financial risks and building financial index price forecasting models.

Details

Asian Journal of Economics and Banking, vol. 8 no. 1
Type: Research Article
ISSN: 2615-9821

Keywords

Open Access
Article
Publication date: 29 July 2020

Mahmood Al-khassaweneh and Omar AlShorman

In the big data era, image compression is of significant importance. Compression of large images is required for everyday tasks, including…

Abstract

In the big data era, image compression is of significant importance. Compression of large images is required for everyday tasks, including electronic data communication and internet transactions. However, two important measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and Modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average subspace is applied to each 3 × 3 block. Blocks with the highest energy are replaced by a single value that represents the average value of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image and enhances the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor without adding any distortion to the resultant decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with other existing methods.
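A minimal sketch of the two stages as they are described above, with an assumed energy threshold and toy data (this is an illustration of the idea, not the paper's algorithm or parameter choices):

# Illustrative sketch of the two stages described (not the paper's exact
# algorithm or thresholds): project each 3x3 block onto the Frei-Chen
# "average" basis, replace blocks dominated by that subspace with their mean
# pixel value, then run-length encode the result.
import numpy as np

W_AVG = np.ones((3, 3)) / 3.0  # normalised Frei-Chen average basis (unit Frobenius norm)

def compress_block(block, threshold=0.7):
    """Replace a 3x3 block with its mean if the average subspace carries at
    least `threshold` of the block's energy (the threshold value is assumed)."""
    total_energy = float(np.sum(block * block))
    avg_energy = float(np.sum(block * W_AVG)) ** 2  # energy of the projection onto W_AVG
    if total_energy > 0 and avg_energy / total_energy >= threshold:
        return np.full((3, 3), block.mean())
    return block

def run_length_encode(values):
    """Plain RLE: a list of (value, run_length) pairs."""
    encoded, run = [], 1
    for prev, cur in zip(values, values[1:]):
        if cur == prev:
            run += 1
        else:
            encoded.append((prev, run))
            run = 1
    encoded.append((values[-1], run))
    return encoded

# Toy usage on a small synthetic "image" whose side length is a multiple of 3.
img = np.tile(np.arange(9, dtype=float).reshape(3, 3), (3, 3))
out = img.copy()
for r in range(0, img.shape[0], 3):
    for c in range(0, img.shape[1], 3):
        out[r:r + 3, c:c + 3] = compress_block(img[r:r + 3, c:c + 3])
print(run_length_encode(list(out.flatten())))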

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Keywords

Content available
Article
Publication date: 6 November 2023

Muneza Kagzi, Sayantan Khanra and Sanjoy Kumar Paul

From a technological determinist perspective, machine learning (ML) may significantly contribute towards sustainable development. The purpose of this study is to synthesize prior…

Abstract

Purpose

From a technological determinist perspective, machine learning (ML) may significantly contribute towards sustainable development. The purpose of this study is to synthesize prior literature on the role of ML in promoting sustainability and to encourage future inquiries.

Design/methodology/approach

This study conducts a systematic review of 110 papers that demonstrate the utilization of ML in the context of sustainable development.

Findings

ML techniques may play a vital role in enabling sustainable development by leveraging data to uncover patterns and facilitate the prediction of various variables, thereby aiding in decision-making processes. Through the synthesis of findings from prior research, it is evident that ML may help in achieving many of the United Nations’ sustainable development goals.

Originality/value

This study represents one of the initial investigations that conducted a comprehensive examination of the literature concerning ML’s contribution to sustainability. The analysis revealed that the research domain is still in its early stages, indicating a need for further exploration.

Details

Journal of Systems and Information Technology, vol. 25 no. 4
Type: Research Article
ISSN: 1328-7265

Keywords

Open Access
Article
Publication date: 22 May 2023

Edmund Baffoe-Twum, Eric Asa and Bright Awuku

Background: Geostatistics focuses on spatial or spatiotemporal datasets. Geostatistics was initially developed to generate probability distribution predictions of ore grade in the…

Abstract

Background: Geostatistics focuses on spatial or spatiotemporal datasets. Geostatistics was initially developed to generate probability distribution predictions of ore grade in the mining industry; however, it has been successfully applied in diverse scientific disciplines. The technique includes univariate and multivariate methods as well as simulations. The kriging geostatistical methods, namely simple, ordinary and universal kriging, are not multivariate models in the usual statistical sense. Nevertheless, simple, ordinary and universal kriging techniques utilize random function models that include unlimited random variables while modeling one attribute. The coKriging (CK) technique is a multivariate estimation method that simultaneously models two or more attributes defined over the same domain as a coregionalization.
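For readers unfamiliar with this family of methods, the sketch below illustrates ordinary kriging, the univariate building block that coKriging extends to several attributes; the variogram model, its parameters and the toy data are illustrative assumptions, not values from the study.

# Minimal ordinary-kriging sketch (the univariate building block that
# coKriging extends to two or more attributes). The variogram model and its
# parameters are assumed for illustration.
import numpy as np

def variogram(h, nugget=0.0, sill=1.0, range_=50.0):
    """Exponential semivariogram model (parameters are illustrative)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / range_))

def ordinary_kriging(coords, values, target):
    """Predict the value at `target` from sampled `coords`/`values`."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                      # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=1))
    weights = np.linalg.solve(A, b)[:n]  # kriging weights sum to one
    return float(weights @ values)

# Toy usage: three observed AADT-like values at 2-D station coordinates.
coords = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 40.0]])
values = np.array([1200.0, 1500.0, 900.0])
print(ordinary_kriging(coords, values, np.array([10.0, 10.0])))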

Objective: This study investigates the impact of population, as an additional variable, on traffic volumes. The additional variable determines the strength or accuracy gained when data integration is adopted. The overall aim is to help improve the estimation of annual average daily traffic (AADT).

Methods, procedures, process: The investigation adopts the coKriging technique with AADT data from 2009 to 2016 from Montana, Minnesota and Washington as primary attributes and population as a controlling factor (second variable). CK was chosen for this study after reviewing the literature and the work completed, and after comparing it with other geostatistical methods.

Results, observations and conclusions: The investigation employed two variables. The data integration methods employed in CK yield more reliable models because their strength is drawn from multiple variables. The cross-validation results of the model types explored with the CK technique successfully evaluate the interpolation technique's performance and help select optimal models for each state. The results from the Montana and Minnesota models accurately represent the states' traffic and population density. The Washington model had a few exceptions; however, the secondary attribute helped yield an accurate interpretation. Consequently, the impact of tourism, shopping, recreation centers and possible transiting patterns throughout the state is worth exploring.

Details

Emerald Open Research, vol. 1 no. 5
Type: Research Article
ISSN: 2631-3952

Keywords

Open Access
Article
Publication date: 22 June 2022

Serena Summa, Alex Mircoli, Domenico Potena, Giulia Ulpiani, Claudia Diamantini and Costanzo Di Perna

Nearly 75% of EU buildings are not energy-efficient enough to meet the international climate goals, which triggers the need to develop sustainable construction techniques with…

Abstract

Purpose

Nearly 75% of EU buildings are not energy-efficient enough to meet the international climate goals, which triggers the need to develop sustainable construction techniques with a high degree of resilience against climate change. In this context, a promising construction technique is represented by ventilated façades (VFs). This paper proposes three different VFs, and the authors define a novel machine learning-based approach to evaluate and predict their energy performance under different boundary conditions, without the need for expensive on-site experimentation.

Design/methodology/approach

The approach is based on the use of machine learning algorithms for the evaluation of different VF configurations and allows for the prediction of the temperatures in the cavities and of the heat fluxes. The authors trained different regression algorithms and obtained low prediction errors, in particular for temperatures. The authors used such models to simulate the thermo-physical behavior of the VFs and determined the most energy-efficient design variant.

Findings

The authors found that regression trees allow for an accurate simulation of the thermal behavior of VFs. The authors also studied feature weights to determine the most relevant thermo-physical parameters. Finally, the authors determined the best design variant and the optimal air velocity in the cavity.
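As a loose illustration of the regression-tree-plus-feature-weights workflow these findings mention, the sketch below trains a single regression tree on synthetic data and inspects its feature importances; the feature names, data and model settings are hypothetical and are not taken from the study.

# Hedged sketch (not the authors' pipeline): a regression tree predicting a
# cavity temperature from a few plausible boundary-condition features, with
# feature importances inspected as in a "feature weights" analysis.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "outdoor_temp": rng.uniform(-5, 35, 500),       # hypothetical feature names
    "solar_irradiance": rng.uniform(0, 900, 500),
    "air_velocity": rng.uniform(0.1, 2.0, 500),
})
# Assumed relationship used only to generate a synthetic target.
df["cavity_temp"] = (0.7 * df.outdoor_temp + 0.02 * df.solar_irradiance
                     - 3.0 * df.air_velocity + rng.normal(0, 0.5, 500))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="cavity_temp"), df["cavity_temp"], random_state=2)
tree = DecisionTreeRegressor(max_depth=6, random_state=2).fit(X_train, y_train)
print("R^2 on held-out data:", round(tree.score(X_test, y_test), 3))
print(dict(zip(X_train.columns, tree.feature_importances_.round(3))))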

Originality/value

This study is unique in four main aspects: the thermodynamic analysis is performed under different thermal masses, cavity positions and geometries; the VFs are mated with a controlled ventilation system, used to parameterize the thermodynamic behavior under stepwise variations of the air inflow; temperatures and heat fluxes are predicted through machine learning models; and the best configuration is determined through simulations, with no onerous in situ experimentation needed.

Details

Construction Innovation, vol. 24 no. 7
Type: Research Article
ISSN: 1471-4175

Keywords

Content available
Article
Publication date: 1 August 2023

Elham Mahamedi, Martin Wonders, Nima Gerami Seresht, Wai Lok Woo and Mohamad Kassem

The purpose of this paper is to propose a novel data-driven approach for predicting energy performance of buildings that can address the scarcity of quality data, and consider the…

Abstract

Purpose

The purpose of this paper is to propose a novel data-driven approach for predicting energy performance of buildings that can address the scarcity of quality data, and consider the dynamic nature of building systems.

Design/methodology/approach

This paper proposes a reinforcing machine learning (ML) approach based on transfer learning (TL) to address these challenges. The proposed approach dynamically incorporates the data captured by the building management systems into the model to improve its accuracy.
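A conceptual sketch of this kind of workflow is given below: a network pre-trained on data from other buildings is fine-tuned on scarce target-building data and then updated incrementally as new building management system readings arrive. The architecture, the PyTorch implementation and the synthetic data are assumptions used only to illustrate the transfer-plus-incremental-update idea, not the authors' reinforcing TL architecture.

# Conceptual sketch only, under assumed architecture and synthetic data.
import torch
from torch import nn

torch.manual_seed(0)

def make_model():
    return nn.Sequential(nn.Linear(6, 32), nn.ReLU(),
                         nn.Linear(32, 16), nn.ReLU(),
                         nn.Linear(16, 1))

def train(model, X, y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return model

# 1) Pre-train on (synthetic) source-building data.
Xs, ys = torch.randn(800, 6), torch.randn(800, 1)
model = train(make_model(), Xs, ys)

# 2) Transfer: freeze the early layers, fine-tune on scarce target-building data.
for layer in list(model.children())[:2]:
    for p in layer.parameters():
        p.requires_grad = False
Xt, yt = torch.randn(40, 6), torch.randn(40, 1)
model = train(model, Xt, yt, epochs=100)

# 3) Keep updating the remaining layers as new BMS batches stream in.
for _ in range(5):
    X_new, y_new = torch.randn(10, 6), torch.randn(10, 1)
    model = train(model, X_new, y_new, epochs=20)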

Findings

It was shown that the proposed approach could improve the accuracy of the energy performance prediction compared to the conventional TL (non-reinforcing) approach by 19 percentage points in mean absolute percentage error.

Research limitations/implications

The case study results confirm the practicality of the proposed approach and show that it outperforms the standard ML approach (with no transferred knowledge) when little data is available.

Originality/value

This approach contributes to the body of knowledge by addressing the limited data availability in the building sector using TL and by accounting for the dynamics of buildings’ energy performance through the reinforcing architecture. The proposed approach is implemented in a case study project based in London, UK.

Details

Construction Innovation, vol. 24 no. 1
Type: Research Article
ISSN: 1471-4175

Keywords

Open Access
Article
Publication date: 9 November 2023

Abdulmohsen S. Almohsen, Naif M. Alsanabani, Abdullah M. Alsugair and Khalid S. Al-Gahtani

The variance between the winning bid and the owner's estimated cost (OEC) is one of the construction management risks in the pre-tendering phase. The study aims to enhance the…

Abstract

Purpose

The variance between the winning bid and the owner's estimated cost (OEC) is one of the construction management risks in the pre-tendering phase. The study aims to enhance the quality of the owner's estimate so that the contract cost can be predicted precisely at the pre-tendering phase, avoiding future issues that arise during the construction phase.

Design/methodology/approach

This paper integrated artificial neural networks (ANN), deep neural networks (DNN) and time series (TS) techniques to accurately estimate the ratio of the low bid to the OEC (R) for contracts of different sizes and three contract types (building, electrical and mechanical), based on 94 contracts from King Saud University. The ANN and DNN models were evaluated using the mean absolute percentage error (MAPE), mean sum square error (MSSE) and root mean sum square error (RMSSE).
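For reference, the three evaluation metrics named above can be computed as follows; the formulas are the commonly used definitions, and the toy R values are hypothetical (the paper's exact definitions and data may differ).

# Small sketch of the three evaluation metrics named in the abstract.
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def msse(actual, predicted):
    """Mean sum square error (here: the mean of the squared errors)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean((actual - predicted) ** 2)

def rmsse(actual, predicted):
    """Root of the mean sum square error."""
    return np.sqrt(msse(actual, predicted))

# Toy example with hypothetical ratios R = (low bid) / (owner's estimated cost).
r_actual = np.array([0.92, 1.05, 0.88, 1.10])
r_predicted = np.array([0.95, 1.02, 0.90, 1.07])
print(mape(r_actual, r_predicted), msse(r_actual, r_predicted), rmsse(r_actual, r_predicted))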

Findings

The main finding is that the ANN provides high accuracy, with MAPE, MSSE and RMSSE values of 2.94%, 0.0015 and 0.039, respectively. The DNN's precision was high, with an RMSSE of 0.15 on average.

Practical implications

The owner and consultant are expected to use the study's findings to improve the accuracy of the owner's estimate and decrease the difference between the owner's estimate and the lowest submitted offer, supporting better decision-making.

Originality/value

This study fills the knowledge gap by developing an ANN model to handle missing TS data and forecast the difference between a low bid and an OEC at the pre-tendering phase.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 13
Type: Research Article
ISSN: 0969-9988

Keywords
