Search results

1 – 10 of 38
Open Access
Article
Publication date: 26 May 2023

Mpho Trinity Manenzhe, Arnesh Telukdarie and Megashnee Munsamy

Abstract

Purpose

The purpose of this paper is to propose a system dynamic simulated process model for maintenance work management incorporating the Fourth Industrial Revolution (4IR) technologies.

Design/methodology/approach

The extant literature on physical asset maintenance indicates that poor maintenance management stems predominantly from the lack of a clearly defined maintenance work management process model. This paper addresses the problem using a combination of conceptual process modeling and system dynamics simulation incorporating 4IR technologies. A process for maintenance work management and its control actions on scheduled versus unscheduled maintenance tasks is modeled, replicating real-world scenarios through a digital lens (4IR technologies) to support a predictive maintenance strategy.
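
As a rough illustration of the system dynamics framing (not the authors' model), the sketch below treats scheduled and unscheduled maintenance backlogs as stocks and lets an assumed 4IR detection gain reduce surprise failures; every rate and parameter is an invented placeholder.

```python
# Illustrative stock-and-flow sketch only; rates and the 4IR detection gain
# are invented placeholders, not parameters from the paper.
import numpy as np

def simulate(weeks=104, detection_gain=1.0):
    scheduled_backlog = 50.0     # planned tasks awaiting execution
    unscheduled_backlog = 20.0   # corrective (breakdown) tasks awaiting execution
    history = []
    for _ in range(weeks):
        planned_arrivals = 40.0                  # new scheduled tasks per week
        failures = 15.0 / detection_gain         # predictive sensing reduces surprise failures
        crew_capacity = 55.0                     # tasks a crew can close per week
        # corrective work pre-empts scheduled work
        unscheduled_done = min(unscheduled_backlog, crew_capacity)
        scheduled_done = min(scheduled_backlog, crew_capacity - unscheduled_done)
        scheduled_backlog += planned_arrivals - scheduled_done
        unscheduled_backlog += failures - unscheduled_done
        history.append((scheduled_backlog, unscheduled_backlog))
    return np.array(history)

baseline = simulate(detection_gain=1.0)   # without 4IR technologies
with_4ir = simulate(detection_gain=1.5)   # assumed effect of 4IR-enabled prediction
```

Comparing the two backlog trajectories gives a simple analogue of the paper's scheduled-versus-unscheduled control logic.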

Findings

A process for maintenance work management is thus modeled and simulated as a dynamic system. Post-model validation, this study reveals that the real-world maintenance work management process can be replicated using system dynamics modeling. The impact analysis of 4IR technologies on maintenance work management systems reveals that the implementation of 4IR technologies intensifies asset performance with an overall gain of 27.46%, yielding the best maintenance index. This study further reveals that the benefits of 4IR technologies positively impact equipment defect predictability before failure, thereby yielding a predictive maintenance strategy.

Research limitations/implications

The study focused on the maintenance work management system without considering other subsystems such as the cost of maintenance, production dynamics and supply chain management.

Practical implications

The real-world quantitative maintenance data were retrieved from two maintenance departments of company A over a period of 24 months, covering the years 2017 and 2018, and represent six different types of equipment used in underground mines. The qualitative maintenance management data (organizational documents) were retrieved from company A and company B. Company A is a global mining company, and company B is a global manufacturing company. The reliability of the data used in model validation has practical implications for how the maintenance work management system behaves with the benefit of 4IR technologies' implementation.

Social implications

This research yields an overall benefit in asset management, thereby intensifying asset performance. The expected learnings are intended to benefit future research in the field of physical asset management and, most importantly, industry practitioners in physical asset management.

Originality/value

This paper provides a model in which maintenance work and its dynamics are systematically managed. Uncontrolled corrective maintenance work increases the complexity of overall maintenance work management. The use of a system dynamics model and simulation incorporating 4IR technologies adds value to the effectiveness of maintenance work management.

Details

Journal of Quality in Maintenance Engineering, vol. 29 no. 5
Type: Research Article
ISSN: 1355-2511

Open Access
Article
Publication date: 3 August 2020

Djordje Cica, Branislav Sredanovic, Sasa Tesic and Davorin Kramar

Abstract

Sustainable manufacturing is one of the most important and most challenging issues in the present industrial scenario. With the intention of diminishing the negative effects associated with cutting fluids, the machining industries are continuously developing technologies and systems for cooling/lubricating the cutting zone while maintaining machining efficiency. In the present study, three regression-based machine learning techniques, namely polynomial regression (PR), support vector regression (SVR) and Gaussian process regression (GPR), were developed to predict machining force, cutting power and cutting pressure in the turning of AISI 1045. In the development of the predictive models, the machining parameters of cutting speed, depth of cut and feed rate were considered as control factors. Since cooling/lubricating techniques significantly affect machining performance, the prediction models of the quality characteristics were developed under minimum quantity lubrication (MQL) and high-pressure coolant (HPC) cutting conditions. The prediction accuracy of the developed models was evaluated using statistical error analysis methods. The results of the regression-based machine learning techniques were also compared with those of probably the most frequently used machine learning method, namely artificial neural networks (ANN). Finally, a metaheuristic approach based on a neural network algorithm was utilized to perform an efficient multi-objective optimization of process parameters for both cutting environments.
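
A minimal sketch of the three regression learners named above, alongside an ANN baseline, using scikit-learn; the synthetic force data, parameter ranges and hyperparameters are assumptions, not the study's AISI 1045 measurements or settings.

```python
# Hedged sketch: PR, SVR and GPR (plus an ANN baseline) on synthetic turning data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# cutting speed, depth of cut, feed rate (assumed ranges)
X = rng.uniform([60, 0.5, 0.05], [120, 2.0, 0.3], size=(60, 3))
y = 500 * X[:, 1] * X[:, 2] + 0.5 * X[:, 0] + rng.normal(0, 5, 60)  # synthetic machining force

models = {
    "PR": make_pipeline(PolynomialFeatures(degree=2), LinearRegression()),
    "SVR": make_pipeline(StandardScaler(), SVR(C=100, epsilon=1.0)),
    "GPR": make_pipeline(StandardScaler(), GaussianProcessRegressor(kernel=RBF())),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)),
}
for name, model in models.items():
    rmse = -cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: cross-validated RMSE = {rmse:.2f}")
```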

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 23 October 2023

Jan Svanberg, Tohid Ardeshiri, Isak Samsten, Peter Öhman, Presha E. Neidermeyer, Tarek Rana, Frank Maisano and Mats Danielson

Abstract

Purpose

The purpose of this study is to develop a method to assess social performance. Traditionally, environment, social and governance (ESG) rating providers use subjectively weighted arithmetic averages to combine a set of social performance (SP) indicators into a single rating. To overcome the subjectivity of this weighting, this study investigates the preconditions for a new methodology for rating the SP component of ESG by applying machine learning (ML) and artificial intelligence (AI) anchored to social controversies.

Design/methodology/approach

This study proposes the use of a data-driven rating methodology that derives the relative importance of SP features from their contribution to the prediction of social controversies. The authors use the proposed methodology to solve the weighting problem with overall ESG ratings and further investigate whether prediction is possible.
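
A hedged sketch of the weighting idea described above: a classifier is trained to predict controversies from SP indicators, and its permutation importances are normalized into rating weights. The synthetic data, the model choice (gradient boosting) and the importance measure are assumptions, not necessarily the authors' pipeline.

```python
# Data-driven SP weights derived from controversy prediction (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, sp_features = 2000, 8
X = rng.normal(size=(n, sp_features))                    # SP indicator scores per firm
logits = 1.2 * X[:, 0] - 0.8 * X[:, 3] + rng.normal(0, 1, n)
y = (logits > 0.5).astype(int)                           # 1 = firm hit by a social controversy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
weights = np.clip(imp.importances_mean, 0, None)
weights /= weights.sum()                                 # data-driven weights replace subjective ones
social_rating = X @ weights                              # one controversy-anchored rating per firm
```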

Findings

The authors find that ML models are able to predict controversies with high predictive performance and validity. The findings indicate that the weighting problem with the ESG ratings can be addressed with a data-driven approach. The decisive prerequisite, however, for the proposed rating methodology is that social controversies are predicted by a broad set of SP indicators. The results also suggest that predictively valid ratings can be developed with this ML-based AI method.

Practical implications

This study offers practical solutions to ESG rating problems that have implications for investors, ESG raters and socially responsible investments.

Social implications

The proposed ML-based AI method can help to achieve better ESG ratings, which will in turn help to improve SP, which has implications for organizations and societies through sustainable development.

Originality/value

To the best of the authors’ knowledge, this research is one of the first studies to offer a method that addresses the ESG rating problem and improves sustainability by focusing on SP indicators.

Details

Sustainability Accounting, Management and Policy Journal, vol. 14 no. 7
Type: Research Article
ISSN: 2040-8021

Open Access
Article
Publication date: 3 January 2024

Eloy Gil-Cordero, Pablo Ledesma-Chaves, Rocío Arteaga Sánchez and Ari Melo Mariano

Abstract

Purpose

The aim of this study is to examine Spanish users' behavioral intention (BI) to adopt the Coinbase Wallet.

Design/methodology/approach

A survey was administered to individuals residing in Spain between March and April 2021, and 301 questionnaires were analyzed. This research applies a new predictive model based on the technology acceptance model 2 (TAM2), the unified theory of acceptance and use of technology (UTAUT) model, the theory of perceived risk and the commitment-trust theory. A mixed partial least squares structural equation modeling (PLS-SEM)/fuzzy-set qualitative comparative analysis (fsQCA) methodology was employed for the modeling and data analysis.
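
For readers unfamiliar with the fsQCA half of the mixed method, the sketch below shows only the standard direct calibration step that converts survey scores into fuzzy-set membership; the three anchor values are illustrative assumptions, not those used in the study.

```python
# Ragin-style direct calibration of Likert scores to fuzzy-set membership (illustrative).
import numpy as np

def calibrate(x, full_non=2.0, crossover=4.0, full_in=6.0):
    """Map a 1-7 Likert score to fuzzy membership in [0, 1]."""
    # log-odds scaling so the anchors map to roughly 0.05, 0.5 and 0.95
    scale = np.where(x >= crossover,
                     3.0 / (full_in - crossover),
                     3.0 / (crossover - full_non))
    return 1.0 / (1.0 + np.exp(-scale * (x - crossover)))

scores = np.array([1.5, 3.0, 4.0, 5.5, 6.8])   # e.g. perceived-risk item means
print(calibrate(scores))                        # memberships near 0, below .5, at .5, above .5, near 1
```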

Findings

The results showed that all the proposed variables have a direct and positive influence on the intention to use the Coinbase Wallet. The findings present clear directions for traders, investors and academics focused on improving their understanding of the characteristics of these markets.

Originality/value

First, this study addresses important concerns relating to the adoption of crypto-wallets during the global pandemic. Second, this research contributes to the existing literature by adding electronic word of mouth (e-WOM), trust, web quality and perceived risk as new drivers of the intention to use the Coinbase Wallet, providing unique and innovative insights. Finally, the study offers a solid methodological contribution by integrating linear (PLS) and nonlinear (fsQCA) techniques, showing that both methodologies provide a better understanding of the problem and a more detailed awareness of the patterns of antecedent factors.

Details

International Journal of Bank Marketing, vol. 42 no. 3
Type: Research Article
ISSN: 0265-2323

Open Access
Article
Publication date: 31 May 2023

Xiaojie Xu and Yun Zhang

Abstract

Purpose

For policymakers and participants in financial markets, predicting the trading volumes of financial indices is an important issue. This study aims to address such a prediction problem for the CSI300 nearby futures by using high-frequency data recorded each minute, from the launch date of the futures to roughly two years after the constituent stocks of the futures all became shortable, a period that witnessed significantly increased trading activity.

Design/methodology/approach

This study adopts a neural network to model the irregular trading volume series of the CSI300 nearby futures and to answer the following questions: can the lags of the trading volume series be used to make predictions; if so, how far ahead and how accurately; can predictive information from the trading volumes of the CSI300 spot and first distant futures improve prediction accuracy, and by how much; how sophisticated is the model; and how robust are its predictions?

Findings

The results of this study show that a simple neural network model with 10 hidden neurons can be constructed to robustly predict the trading volume of the CSI300 nearby futures using 1–20 min ahead trading volume data. The model yields a root mean square error of about 955 contracts. Utilizing additional predictive information from the trading volumes of the CSI300 spot and first distant futures further improves prediction accuracy, with gains of about 1–2%. This benefit is particularly significant when the trading volume of the CSI300 nearby futures is close to zero. Another benefit, at the cost of a slightly more sophisticated model with more hidden neurons, is that predictions can be generated from 1–30 min ahead trading volume data.
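
A bare-bones sketch of the kind of lag-based forecast described here: a feed-forward network with 10 hidden neurons maps the previous 20 one-minute volumes to the next observation. The synthetic series stands in for the (non-public) CSI300 data, so the error it reports is not comparable to the 955-contract figure above.

```python
# Lagged-volume neural network forecast on a synthetic stand-in series.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
# smooth, autocorrelated synthetic one-minute volume series
volume = 3000 + np.convolve(rng.normal(0, 300, 20_050), np.ones(50) / 50, mode="valid")

lags = 20
X = np.column_stack([volume[i:len(volume) - lags + i] for i in range(lags)])  # 20 lagged values
y = volume[lags:]                                                             # next-minute volume
split = int(0.8 * len(y))

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X[:split], y[:split])
rmse = np.sqrt(mean_squared_error(y[split:], net.predict(X[split:])))
print(f"out-of-sample RMSE: {rmse:.0f} contracts")
```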

Originality/value

The results of this study could be used for multiple purposes, including designing financial index trading systems and platforms, monitoring systematic financial risks and forecasting financial index prices.

Details

Asian Journal of Economics and Banking, vol. 8 no. 1
Type: Research Article
ISSN: 2615-9821

Open Access
Article
Publication date: 3 May 2024

Mohamed Ali Trabelsi

Abstract

Purpose

This paper reviews recent research on the expected economic effects of developing artificial intelligence (AI) through a survey of the latest publications, in particular papers and reports issued by academics, consulting companies and think tanks.

Design/methodology/approach

Our paper offers a point of view on AI and its impact on the global economy and presents a descriptive analysis of the AI phenomenon.

Findings

AI represents a driver of productivity and economic growth. It can increase efficiency and significantly improve the decision-making process by analyzing large amounts of data, yet at the same time it creates equally serious risks of job market polarization, rising inequality, structural unemployment and the emergence of new undesirable industrial structures.

Practical implications

This paper serves as a building block for further research by introducing the two main factors of the Cobb-Douglas production function: labor and capital. Indeed, Zeira (1998) and Aghion, Jones and Jones (2017) suggested that AI can stimulate growth by replacing labor, a limited resource, with capital, an unlimited resource, both in the production of goods and services and in the production of ideas.
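
For reference, the Cobb-Douglas form mentioned above can be written as follows; the remark on automation is a stylized reading of the Zeira-type argument, not the exact specification in those papers.

```latex
\[
  Y = A\,K^{\alpha}\,L^{1-\alpha}, \qquad 0 < \alpha < 1,
\]
% Y: output, A: total factor productivity, K: capital, L: labor.
% In Zeira-type models, AI allows capital to perform tasks previously requiring
% labor, effectively shifting weight toward the accumulable factor K and thereby
% sustaining growth even when L is fixed.
```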

Originality/value

Our study contributes to the previous literature and presents a descriptive analysis of the impact of AI on technological development, economic growth and employment.

Details

Journal of Electronic Business & Digital Economics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-4214

Open Access
Article
Publication date: 22 November 2023

En-Ze Rui, Guang-Zhi Zeng, Yi-Qing Ni, Zheng-Wei Chen and Shuo Hao

Abstract

Purpose

Current methods for flow field reconstruction mainly rely on data-driven algorithms which require an immense amount of experimental or field-measured data. Physics-informed neural network (PINN), which was proposed to encode physical laws into neural networks, is a less data-demanding approach for flow field reconstruction. However, when the fluid physics is complex, it is tricky to obtain accurate solutions under the PINN framework. This study aims to propose a physics-based data-driven approach for time-averaged flow field reconstruction which can overcome the hurdles of the above methods.

Design/methodology/approach

A multifidelity strategy leveraging PINN and a nonlinear information fusion (NIF) algorithm is proposed. Plentiful low-fidelity data are generated from the predictions of a PINN constructed purely from the Reynolds-averaged Navier–Stokes equations, while sparse high-fidelity data are obtained from field or experimental measurements. The NIF algorithm is then applied to elicit a multifidelity model, which blends the nonlinear cross-correlation information between the low- and high-fidelity data.
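
The sketch below illustrates the multifidelity fusion idea in one dimension, with a Gaussian process standing in for the NIF step: it learns the nonlinear map from (location, low-fidelity prediction) to a handful of high-fidelity sensor values and then corrects the low-fidelity field everywhere. The analytic "PINN" surrogate, the sensor locations and the kernel are invented for illustration.

```python
# Toy multifidelity fusion: correct a low-fidelity field with sparse high-fidelity data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def u_low(x):              # plentiful low-fidelity predictions (PINN stand-in)
    return np.sin(8 * x)

def u_high(x):             # "true" field, observable only at a few sensor locations
    return 1.2 * np.sin(8 * x) + 0.3 * x

x_sensors = np.array([0.05, 0.3, 0.55, 0.8, 0.95]).reshape(-1, 1)
features = np.hstack([x_sensors, u_low(x_sensors)])           # inputs: (x, u_low(x))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
gp.fit(features, u_high(x_sensors).ravel())

x_grid = np.linspace(0, 1, 200).reshape(-1, 1)
u_fused = gp.predict(np.hstack([x_grid, u_low(x_grid)]))      # corrected field everywhere
```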

Findings

Two experimental cases are used to verify the capability and efficacy of the proposed strategy through comparison with other widely used strategies. It is revealed that the missing flow information within the whole computational domain can be favorably recovered by the proposed multifidelity strategy using sparse measurement/experimental data. The elicited multifidelity model inherits the underlying physics embedded in the low-fidelity PINN predictions and rectifies those predictions over the whole computational domain. The proposed strategy is far superior to the comparison strategies in terms of reconstruction accuracy.

Originality/value

In this study, a physics-informed data-driven strategy for time-averaged flow field reconstruction is proposed which extends the applicability of the PINN framework. In addition, embedding physical laws when training the multifidelity model leads to less data demand for model development compared to purely data-driven methods for flow field reconstruction.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 1
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 29 May 2023

Christopher Amaral, Ceren Kolsarici and Mikhail Nediak

Abstract

Purpose

The purpose of this study is to understand the profit implications of analytics-driven centralized discriminatory pricing at the headquarter level compared with sales force price delegation in the purchase of an aftermarket good through an indirect retail channel with symmetric information.

Design/methodology/approach

Using individual-level loan application and approval data from a North American financial institution and segment-level customer risk as the price discrimination criterion for the firm, the authors develop a three-stage model that accounts for the salesperson’s price decision within the limits of the latitude provided by the firm; the firm’s decision to approve or not approve a sales application; and the customer’s decision to accept or reject a sales offer conditional on the firm’s approval. Next, the authors compare the profitability of this sales force price delegation model to that of a segment-level centralized pricing model where agent incentives and consumer prices are simultaneously optimized using a quasi-Newton nonlinear optimization algorithm (i.e. Broyden–Fletcher–Goldfarb–Shanno algorithm).
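
As a stylized illustration of the centralized optimization step only (not the three-stage empirical model), the snippet below jointly chooses a segment price and a sales commission to maximize expected profit under a logistic acceptance probability, using the BFGS quasi-Newton routine mentioned above; all demand and cost parameters are invented.

```python
# Joint price/commission optimization with BFGS (toy parameters, not the paper's model).
import numpy as np
from scipy.optimize import minimize

def neg_expected_profit(params, cost=0.2, price_sensitivity=0.8, effort_gain=1.2):
    """Negative expected profit per applicant; all parameters are invented."""
    price, commission = params
    utility = 1.5 - price_sensitivity * price + effort_gain * commission
    accept_prob = 1.0 / (1.0 + np.exp(-utility))     # logistic acceptance probability
    margin = price - cost - commission
    return -(accept_prob * margin)

result = minimize(neg_expected_profit, x0=np.array([1.0, 0.2]), method="BFGS")
optimal_price, optimal_commission = result.x
print(f"price = {optimal_price:.2f}, commission = {optimal_commission:.2f}")
```

In practice this objective would be evaluated per risk segment, so that the price discrimination criterion described above enters through segment-specific parameters.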

Findings

The results suggest that implementing analytics-driven centralized discriminatory pricing and optimal sales force incentives leads to double-digit lifts in firm profits. Moreover, the authors find that the high-risk customer segment is less price-sensitive and that firms, by leveraging this segment’s willingness to pay, not only improve their bottom line but also give these marginalized customers, who traditionally face low approval rates, access to loans. This highlights the important customer welfare implications of the findings.

Originality/value

Substantively, to the best of the authors’ knowledge, this paper is the first to empirically investigate the profitability of analytics-driven, segment-level (i.e. discriminatory) centralized pricing compared with sales force price delegation in indirect retail channels (i.e. where agents are external to the firm and have access to competitor products), taking into account the decisions of the three key stakeholders in the process, namely the consumer, the salesperson and the firm, while simultaneously optimizing sales commissions and centralized consumer prices.

Details

European Journal of Marketing, vol. 57 no. 13
Type: Research Article
ISSN: 0309-0566

Open Access
Article
Publication date: 15 March 2024

Mohammadreza Tavakoli Baghdadabad

Abstract

Purpose

We propose a risk factor for idiosyncratic entropy and explore the relationship between this factor and expected stock returns.

Design/methodology/approach

We estimate a cross-sectional model of expected entropy that uses several common risk factors to predict idiosyncratic entropy.

Findings

We find a negative relationship between expected idiosyncratic entropy and returns. Specifically, the Carhart alpha of a low expected entropy portfolio exceeds the alpha of a high expected entropy portfolio by −2.37% per month. We also find a negative and significant price of expected idiosyncratic entropy risk using the Fama-MacBeth cross-sectional regressions. Interestingly, expected entropy helps us explain the idiosyncratic volatility puzzle that stocks with high idiosyncratic volatility earn low expected returns.
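
A bare-bones Fama-MacBeth sketch on simulated data, shown only to illustrate the cross-sectional procedure referenced above: returns are regressed on the entropy characteristic month by month, and the time series of slope estimates is t-tested. The study's actual specification also includes the Carhart factors; the numbers below are invented.

```python
# Two-pass Fama-MacBeth illustration for a single entropy characteristic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
months, stocks = 240, 500
entropy = rng.normal(size=(months, stocks))                 # expected idiosyncratic entropy
returns = -0.002 * entropy + rng.normal(0, 0.05, (months, stocks))

lambdas = []
for t in range(months):
    X = np.column_stack([np.ones(stocks), entropy[t]])
    beta, *_ = np.linalg.lstsq(X, returns[t], rcond=None)   # monthly cross-sectional OLS
    lambdas.append(beta[1])

lambdas = np.array(lambdas)
t_stat, p_val = stats.ttest_1samp(lambdas, 0.0)
print(f"price of entropy risk: {lambdas.mean():.4f} (t = {t_stat:.2f})")
```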

Originality/value

We propose a risk factor of idiosyncratic entropy and explore the relationship between this factor and expected stock returns. Interestingly, expected entropy helps us explain the idiosyncratic volatility puzzle that stocks with high idiosyncratic volatility earn low expected returns.

Details

China Accounting and Finance Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1029-807X

Open Access
Article
Publication date: 8 August 2023

Elisa Verna, Gianfranco Genta and Maurizio Galetto

Abstract

Purpose

The purpose of this paper is to investigate and quantify the impact of product complexity, including architectural complexity, on operator learning, productivity and quality performance in both assembly and disassembly operations. This topic has not been extensively investigated in previous research.

Design/methodology/approach

An extensive experimental campaign was conducted in which 84 operators repeatedly assembled and disassembled six different products of varying complexity, in order to construct productivity and quality learning curves. Data from the experiment were analysed using statistical methods.
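
For context, a productivity learning curve of the kind mentioned above is often fitted as a Wright-type power law; the sketch below fits such a curve to invented cycle times, not the experimental data from the 84-operator campaign.

```python
# Fit a Wright-type learning curve, time = a * n^(-b), to repeated cycle times.
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(n, a, b):
    return a * n ** (-b)

repetition = np.arange(1, 11)
cycle_time = np.array([120, 102, 95, 88, 85, 81, 79, 77, 76, 74], dtype=float)  # seconds (invented)

(a, b), _ = curve_fit(learning_curve, repetition, cycle_time, p0=(120.0, 0.2))
print(f"first-cycle time a = {a:.1f} s, learning exponent b = {b:.3f}")
```

Comparing the fitted exponent across products of different architectural complexity is one simple way to quantify the learning effects the study reports.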

Findings

The human learning factor of productivity increases superlinearly with the increasing architectural complexity of products, i.e. from centralised to distributed architectures, both in assembly and disassembly, regardless of the level of overall product complexity. On the other hand, the human learning factor of quality performance decreases superlinearly as the architectural complexity of products increases. The intrinsic characteristics of product architecture are the reasons for this difference in learning factor.

Practical implications

The results of the study suggest that considering product complexity, particularly architectural complexity, in the design and planning of manufacturing processes can optimise operator learning, productivity and quality performance, and inform decisions about improving manufacturing operations.

Originality/value

While previous research has focussed on the effects of complexity on process time and defect generation, this study is amongst the first to investigate and quantify the effects of product complexity, including architectural complexity, on operator learning using an extensive experimental campaign.

Details

Journal of Manufacturing Technology Management, vol. 34 no. 9
Type: Research Article
ISSN: 1741-038X
