Search results

1 – 10 of 261
Article
Publication date: 14 May 2024

Ben Hoehn, Hannah Salzberger and Sven Bienert

Abstract

Purpose

The study aims to assess the effectiveness of prevailing methods for quantifying physical climate risks. Its goal is to evaluate their utility in guiding financial decision-making within the real estate industry. Whilst climate risk has become a pivotal consideration in transaction and regulatory compliance, the existing tools for risk quantification frequently encounter criticism for their perceived lack of transparency and comparability.

Design/methodology/approach

We utilise a sequential exploratory mixed-methods analysis to integrate qualitative aspects of underlying tool characteristics with quantitative result divergence. In our qualitative analysis, we conduct interviews with companies providing risk quantification tools. We task these providers with quantifying the physical risk of a fictitious pan-European real estate portfolio. Our approach involves an in-depth comparative analysis, hypothesis tests and regression to discern patterns in the variability of the results.
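
The quantitative step lends itself to a small illustration. The sketch below, a toy under invented names and numbers, scores a hypothetical portfolio with three fictional providers, measures result divergence per asset and hazard as a coefficient of variation, and regresses that divergence on location and hazard, mirroring the pattern analysis described above; the paper's actual inputs are confidential provider outputs.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = [(asset, country, hazard, provider, rng.uniform(0, 1))
        for asset, country in [("A1", "DE"), ("A2", "DE"), ("A3", "ES"),
                               ("A4", "ES"), ("A5", "FR"), ("A6", "FR")]
        for hazard in ("flood", "heat")
        for provider in ("P1", "P2", "P3")]
scores = pd.DataFrame(rows, columns=["asset", "country", "hazard",
                                     "provider", "score"])

# Divergence per asset and hazard: coefficient of variation across providers.
cv = (scores.groupby(["asset", "country", "hazard"])["score"]
            .agg(lambda s: s.std(ddof=1) / s.mean())
            .rename("cv")
            .reset_index())

# Regress divergence on location and hazard to discern variability patterns.
print(smf.ols("cv ~ C(country) + C(hazard)", data=cv).fit().summary())
```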

Findings

We observe significant variations in the quantification of physical risk for the pan-European portfolio, indicating limited utility for decision-making. The results highlight that variability is influenced by both the location of assets and the hazard. Identified reasons for discrepancies include differences in regional databases and models, variations in downscaling and corresponding scope, disparities in the definition of scores and systematic uncertainties.

Practical implications

The study assists market participants in comprehending both the quantification process and the implications associated with using tools for financial decision-making.

Originality/value

To our knowledge, this study presents the initial robust empirical evidence of variability in quantification outputs for physical risk within the real estate industry, coupled with an exploration of their underlying reasons.

Details

Journal of Property Investment & Finance, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-578X

Article
Publication date: 21 December 2023

Libiao Bai, Xuyang Zhao, ShuYun Kang, Yiming Ma and BingBing Zhang

Abstract

Purpose

Research and development (R&D) projects are often pursued through a project portfolio (PP). R&D PPs involve many stakeholders, and without proactive management, their interactions may lead to conflict risks. These conflict risks change dynamically with different stages of the PP life cycle, increasing the challenge of PP risk management. Existing conflict risk research mainly focuses on source identification but lacks risk assessment work. To better manage the stakeholder conflict risks (SCRs) of R&D PPs, this study employs a dynamic Bayesian network (DBN) to construct a dynamic assessment model.

Design/methodology/approach

This study constructs a DBN model to assess the SCRs in R&D PPs. First, an indicator system of SCRs is constructed from the life cycle perspective. Then, the risk relationships within each R&D PP life-cycle stage are identified via interpretative structural modeling (ISM). The prior and conditional probabilities of risks are obtained by expert judgment and Monte Carlo simulation (MCS). Finally, crucial SCRs at each stage are identified utilizing propagation analysis, and the corresponding risk responses are proposed.
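
As a minimal sketch of the propagation idea behind a DBN, the snippet below pushes a two-state conflict risk (low/high) through stage-specific transition probabilities over a hypothetical life cycle. The stage names and all probabilities are invented stand-ins for the expert-elicited and MCS-derived conditional probability tables in the paper.

```python
import numpy as np

# P(state_t | state_{t-1}) per life-cycle stage; states = [low, high].
# All stages and probabilities are invented placeholders.
transitions = {
    "planning":  np.array([[0.85, 0.15],
                           [0.30, 0.70]]),
    "execution": np.array([[0.70, 0.30],
                           [0.20, 0.80]]),
    "closure":   np.array([[0.90, 0.10],
                           [0.40, 0.60]]),
}

belief = np.array([0.9, 0.1])   # prior: conflict risk mostly low
for stage, T in transitions.items():
    belief = belief @ T         # forward propagation over one time slice
    print(f"{stage:>9}: P(high conflict risk) = {belief[1]:.3f}")
```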

Findings

The results identify the crucial risks at each stage. For these crucial risks, the study suggests appropriate response strategies to help managers better perform risk response activities.

Originality/value

This study dynamically assesses the stakeholder conflict risks in R&D PPs from a life-cycle perspective, extending the stakeholder risk management research. Meanwhile, the crucial risks are identified at each stage accordingly, providing managerial insights for R&D PPs.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 5 July 2024

Aditya Thangjam, Sanjita Jaipuria and Pradeep Kumar Dadabada

Abstract

Purpose

The purpose of this study is to propose a systematic model selection procedure for long-term load forecasting (LTLF) for ex-ante and ex-post cases considering uncertainty in exogenous predictors.

Design/methodology/approach

Different variants of regression models, namely Polynomial Regression (PR), Generalised Additive Model (GAM), Quantile Polynomial Regression (QPR) and Quantile Spline Regression (QSR), incorporating uncertainty in exogenous predictors such as population, Real Gross State Product (RGSP), Real Per Capita Income (RPCI) and temperature, along with indicators of breakpoints and calendar effects, are considered for LTLF. Initially, the Backward Feature Elimination procedure is used to identify the optimal set of predictors for LTLF. Then, the consistency of model accuracies is evaluated using point and probabilistic forecast error metrics for ex-ante and ex-post cases.
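
One of these variants, quantile regression on polynomial features (the QPR idea), can be sketched with scikit-learn together with the pinball loss used later as the probabilistic error metric. The synthetic series and the single predictor are illustrative stand-ins for the paper's exogenous predictors.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor
from sklearn.metrics import mean_pinball_loss
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=(200, 1))            # one stand-in predictor
load = 50 + 4 * x[:, 0] + 0.3 * x[:, 0] ** 2 + rng.normal(0, 5, 200)

q = 0.9   # an upper quantile guards against over-forecast/overinvestment
qpr = make_pipeline(PolynomialFeatures(degree=2),
                    QuantileRegressor(quantile=q, alpha=0.0, solver="highs"))
qpr.fit(x, load)

print("pinball score (q=0.9):",
      mean_pinball_loss(load, qpr.predict(x), alpha=q))
```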

Findings

This study finds that the PR model outperformed in the ex-ante case, while the QPR model outperformed in the ex-post case. Further, the QPR model performed consistently across the validation and testing periods. Overall, the QPR model excelled in capturing uncertainty in exogenous predictors, thereby reducing over-forecast error and the risk of overinvestment.

Research limitations/implications

These findings can help utilities to align model selection strategies with their risk tolerance.

Originality/value

To propose the systematic model selection procedure, the consistency of the PR, GAM, QPR and QSR models is evaluated using the point forecast accuracy metrics Mean Absolute Percentage Error and Root Mean Squared Error, and the probabilistic forecast accuracy metric Pinball Score, for ex-ante and ex-post cases, considering uncertainty in the exogenous predictors RGSP, RPCI, population and temperature.

Details

Journal of Modelling in Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 17 September 2024

Sinan Obaidat, Mohammad Firas Tamimi, Ahmad Mumani and Basem Alkhaleel

Abstract

Purpose

This paper aims to present a predictive model approach to estimate the tensile behavior of polylactic acid (PLA) under uncertainty, using fused deposition modeling (FDM) and the American Society for Testing and Materials (ASTM) D638 Types I and II test standards.

Design/methodology/approach

The prediction approach combines artificial neural networks (ANN), finite element analysis (FEA), Monte Carlo simulation (MCS) and experimental testing to estimate the tensile behavior of FDM parts under input-parameter uncertainty. FEA with variance-based sensitivity analysis is used to quantify the impacts of uncertain variables and determine the significant variables for use in the ANN model. The ANN surrogates the FEA models of the ASTM D638 Types I and II standards, and its prediction capabilities are assessed using MCS. The developed model is applied to testing the tensile behavior of PLA given probabilistic variables of geometry and material properties.
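
The surrogate-plus-Monte-Carlo pattern can be illustrated in a few lines: an ANN is trained on input-response pairs that would, in the paper, come from FEA runs, after which cheap Monte Carlo sampling through the surrogate propagates input uncertainty. The analytic "FEA" stand-in and the parameter distributions below are assumptions, not the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

def fea_stand_in(X):
    # Analytic placeholder for an ASTM D638 tensile FEA run:
    # inputs = [thickness mm, width mm, modulus GPa] -> peak stress MPa.
    t, w, E = X[:, 0], X[:, 1], X[:, 2]
    return 55 + 3.0 * E - 4.0 * t - 1.5 * w

# Training pairs that would come from a modest number of FEA runs.
X_train = rng.uniform([3.0, 12.0, 2.8], [4.0, 14.0, 3.8], size=(300, 3))
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X_train, fea_stand_in(X_train))

# Monte Carlo through the cheap surrogate (significant variables only).
X_mc = rng.normal([3.5, 13.0, 3.3], [0.05, 0.10, 0.15], size=(100_000, 3))
stress = ann.predict(X_mc)
print(f"mean = {stress.mean():.1f} MPa, std = {stress.std():.2f} MPa")
```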

Findings

The results demonstrate that Type I is more appropriate than Type II for predicting tensile behavior under uncertainty. With a training accuracy of 98% and a proven absence of overfitting, the tensile behavior can be successfully modeled using predictive methods that consider the probabilistic nature of input parameters. The proposed approach is generic and can be used for other testing standards, input parameters, materials and response variables.

Originality/value

Using the proposed predictive approach, to the best of the authors' knowledge, the tensile behavior of PLA is predicted for the first time considering uncertainties of input parameters. Also, incorporating global sensitivity analysis to determine the parameters that most influence tensile behavior has not yet been studied for FDM. The use of only significant variables for FEA, ANN and MCS minimizes the computational effort, allowing more simulation runs with a reduced number of variables within acceptable time.

Details

Rapid Prototyping Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 2 August 2024

Faris Elghaish, Sandra Matarneh, M. Reza Hosseini, Algan Tezel, Abdul-Majeed Mahamadu and Firouzeh Taghikhah

Abstract

Purpose

Predictive digital twin technology, which amalgamates digital twins (DT), the Internet of Things (IoT) and artificial intelligence (AI) for data collection, simulation and predictive purposes, has demonstrated its effectiveness across a wide array of industries. Nonetheless, there is a conspicuous lack of comprehensive research in the built environment domain. This study endeavours to fill this void by exploring and analysing the capabilities of the individual technologies to better understand and develop successful integration use cases.

Design/methodology/approach

This study uses a mixed literature review approach, combining bibliometric techniques with thematic and critical assessments of 137 relevant academic papers. Three separate lists were created from the Scopus database, covering AI, IoT and DT, since AI and IoT are crucial to creating predictive DTs. Clear criteria were applied in creating the three lists, including limiting the results to Q1 journals and English-language publications from 2019 to 2023, in order to include the most recent and highest-quality publications. The collected data for the three technologies were analysed using the bibliometric package in RStudio.
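
The paper runs its bibliometrics in R; a rough Python equivalent of the first descriptive step is sketched below, counting annual output and author-keyword frequencies from a Scopus-style CSV export. The file name and column labels are assumptions based on a typical Scopus export.

```python
import pandas as pd

records = pd.read_csv("scopus_dt_iot_ai.csv")   # hypothetical Scopus export

# Annual scientific production over the 2019-2023 review window.
print(records["Year"].value_counts().sort_index())

# Author-keyword frequencies (Scopus separates keywords with ";").
keywords = (records["Author Keywords"].dropna()
            .str.split(";").explode().str.strip().str.lower())
print(keywords.value_counts().head(20))
```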

Findings

Findings reveal asymmetric attention to the various components of a predictive digital twin system. There is a relatively greater body of research on IoT and DT, representing 43% and 47%, respectively; in contrast, direct research on the use of AI for net-zero solutions constitutes only 10%. The findings also underscore the necessity of integrating these three technologies to develop predictive digital twin solutions for carbon emission prediction.

Practical implications

The results indicate that there is a clear need for more case studies investigating the use of large-scale IoT networks to collect carbon data from buildings and construction sites. Furthermore, the development of advanced and precise AI models is imperative for predicting the production of renewable energy sources and the demand for housing.

Originality/value

This paper makes a significant contribution to the field by providing a strong theoretical foundation. It also serves as a catalyst for future research within this domain. For practitioners and policymakers, this paper offers a reliable point of reference.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Open Access
Article
Publication date: 18 July 2024

Christine Dagmar Malin, Jürgen Fleiß, Isabella Seeber, Bettina Kubicek, Cordula Kupfer and Stefan Thalmann

Abstract

Purpose

How to embed artificial intelligence (AI) in human resource management (HRM) is one of the core challenges of digital HRM. Despite regulations demanding humans in the loop to ensure human oversight of AI-based decisions, it is still unknown how much decision-makers rely on information provided by AI and how this affects (personnel) selection quality.

Design/methodology/approach

This paper presents an experimental study using vignettes of dashboard prototypes to investigate the effect of AI on decision-makers’ overreliance in personnel selection, particularly the impact of decision-makers’ information search behavior on selection quality.

Findings

Our study revealed decision-makers' tendency towards status quo bias when using an AI-based ranking system, meaning that they paid more attention to applicants who were ranked higher than to those ranked lower. We identified three information search strategies that have different effects on selection quality: (1) homogeneous search coverage, (2) heterogeneous search coverage and (3) no information search. Both when applicants were searched equally often (i.e. homogeneous coverage) and when certain applicants received more search views than others (i.e. heterogeneous coverage), higher search intensity resulted in higher selection quality. No information search was characterized by low search intensity and low selection quality. Priming decision-makers towards carrying responsibility for their decisions or explaining potential AI shortcomings had no moderating effect on the relationship between search coverage and selection quality.
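
One plausible way to operationalise the two constructs in these findings, search intensity and coverage homogeneity, from raw view counts is sketched below. The entropy-based homogeneity measure is our assumption, not necessarily the authors' exact metric.

```python
import numpy as np

def coverage_profile(views_per_applicant):
    """Return (search intensity, coverage homogeneity in [0, 1])."""
    views = np.asarray(views_per_applicant, dtype=float)
    intensity = views.sum()
    if intensity == 0:
        return intensity, 0.0            # the "no information search" case
    p = views / intensity
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum()
    return intensity, entropy / np.log(len(views))  # 1.0 = perfectly even

for label, views in [("homogeneous", [5, 5, 5, 5]),
                     ("heterogeneous", [9, 6, 3, 2]),
                     ("no search", [0, 0, 0, 0])]:
    i, h = coverage_profile(views)
    print(f"{label:>13}: intensity={i:4.0f}, homogeneity={h:.2f}")
```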

Originality/value

Our study highlights the presence of status quo bias in personnel selection given AI-based applicant rankings, emphasizing the danger that decision-makers over-rely on AI-based recommendations.

Details

Business Process Management Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 2 May 2024

Xin Fan, Yongshou Liu, Zongyi Gu and Qin Yao

Abstract

Purpose

Ensuring the safety of structures is important. However, when a structure possesses both an implicit performance function and an extremely small failure probability, traditional methods struggle to conduct a reliability analysis. Therefore, this paper proposes a reliability analysis method aimed at enhancing the efficiency of rare event analysis, using the widely recognized Relevance Vector Machine (RVM).

Design/methodology/approach

Drawing from the principles of importance sampling (IS), this paper employs Harris Hawks Optimization (HHO) to ascertain the optimal design point. This approach not only guarantees precision but also helps the RVM approximate the limit state surface. When the U learning function, designed for Kriging, is applied to RVM, it results in sample clustering in the design of experiment (DoE). Therefore, this paper proposes an FU learning function, which is more suitable for RVM.
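
The U learning function that drives point selection in the DoE is easy to sketch. Below, scikit-learn's Gaussian process stands in for the RVM surrogate (both expose a predictive mean and standard deviation); the toy limit state, candidate pool and stopping threshold are illustrative, and neither the paper's FU function nor its IS weighting is reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def g(x):                        # toy limit state: failure when g(x) < 0
    return x[:, 0] ** 2 + x[:, 1] - 4.0

rng = np.random.default_rng(3)
candidates = rng.normal(0.0, 2.0, size=(2000, 2))  # IS-style candidate pool
X, y = candidates[:12].copy(), g(candidates[:12])  # initial DoE

for _ in range(30):
    surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = surrogate.predict(candidates, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)      # U learning function
    if U.min() >= 2.0:           # usual stopping rule: sign-error prob small
        break
    best = np.argmin(U)          # most ambiguous point near the limit state
    X = np.vstack([X, candidates[best]])
    y = np.append(y, g(candidates[best:best + 1]))

# Crude unweighted estimate over the pool (the paper's IS weights omitted).
print("failure probability estimate:", (mu < 0).mean())
```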

Findings

Three numerical examples and two engineering problems demonstrate the effectiveness of the proposed method.

Originality/value

By employing the HHO algorithm, this paper innovatively applies RVM to IS reliability analysis, proposing a novel method termed RVM-HIS. RVM-HIS demonstrates exceptional computational efficiency, making it eminently suitable for rare-event reliability analysis with implicit performance functions. Moreover, the computational efficiency of RVM-HIS has been significantly enhanced through the improvement of the U learning function.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 31 July 2024

Shenglei Wu, Jianhui Liu, Yazhou Wang, Jumei Lu and Ziyang Zhang

Abstract

Purpose

Sufficient sample data are a necessary condition for high reliability; in engineering practice, however, fatigue test data are relatively scarce, which affects the prediction accuracy of fatigue life. This research therefore analyzes fatigue data with small-sample characteristics and then performs life assessment under different stress levels.

Design/methodology/approach

Firstly, the Bootstrap method and the principle of fatigue life percentile consistency are used to realize sample aggregation and information fusion. Secondly, the classical outlier detection algorithm (DBSCAN) is used to check the sample data. Then, based on the stress field intensity method, the influence of the non-uniform stress field near the notch root on the fatigue life is analyzed, and the calculation methods of the fatigue damage zone radius and the weighting function are revised. Finally, combined with Weibull distribution, a framework for assessing multiaxial low-cycle fatigue life has been developed.
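
Three building blocks of this pipeline (bootstrap resampling, a DBSCAN outlier check and a Weibull fit) can be sketched as follows. The fatigue lives are invented, DBSCAN's eps and min_samples would need tuning, and the paper's percentile-consistency information fusion is not reproduced.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import DBSCAN

lives = np.array([8.1e4, 9.3e4, 1.05e5, 1.22e5, 1.40e5, 6.0e5])  # cycles

# Outlier check in log space: DBSCAN labels isolated points as -1.
labels = DBSCAN(eps=0.15, min_samples=2).fit_predict(
    np.log10(lives).reshape(-1, 1))
clean = lives[labels != -1]
print("flagged as outliers:", lives[labels == -1])

# Bootstrap aggregation: resample the cleaned data into a larger
# pseudo-sample.
rng = np.random.default_rng(4)
boot = rng.choice(clean, size=2000, replace=True)

# Two-parameter Weibull fit (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(boot, floc=0)
print(f"Weibull shape={shape:.2f}, scale={scale:.3g} cycles")
```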

Findings

The model is verified with experimental data for the Q355(D) material and compared with Yao's stress field intensity method. The results show that the predictions of the model put forward in this research are all located within the double dispersion zone, with better prediction accuracy than Yao's stress field intensity method.

Originality/value

For fatigue test data with small-sample characteristics, this research presents a new notch fatigue analysis method based on the stress field intensity method, combined with the Weibull distribution to construct a low-cycle fatigue life analysis framework, promoting the development of multiaxial fatigue from experimental studies to practical engineering applications.

Details

International Journal of Structural Integrity, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 26 August 2024

Mohammadsadegh Pahlavanzadeh, Sebastian Rulik, Włodzimierz Wróblewski and Krzysztof Rusin

Abstract

Purpose

The performance of a bladeless Tesla turbine is closely tied to momentum diffusion, kinetic energy transfer and wall shear stress generation on its rotating disks. Surface roughness adds complexity to flow analysis in such a domain. This paper aims to assess the effect of roughness on flow structures and the application of roughness models in flow cross sections with submillimeter height, including both stationary and rotating walls.

Design/methodology/approach

This research starts with the examination of flow over a rough flat plate and then proceeds to study flow within minichannels, evaluating the effect of roughness on flow characteristics. An in-house test stand validates the numerical solutions for the minichannel. Finally, flow through the minichannel with corotating walls is analyzed. The k-ω SST turbulence model and Aupoix's roughness method are used for the numerical simulations.
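
The CFD setup itself cannot be reproduced here, but the wall-law physics a roughness model encodes can be sketched: in wall units, roughness shifts the log-law velocity profile down by ΔU+ = (1/κ) ln(1 + 0.3 ks+), the Cebeci-Bradshaw correlation (used here as a generic stand-in for Aupoix's method), where ks+ is the equivalent sand-grain roughness height. The constants and sampled values below are illustrative.

```python
import numpy as np

kappa, B = 0.41, 5.0  # standard log-law constants

def u_plus(y_plus, ks_plus=0.0):
    """Log-law velocity with a sand-grain roughness shift (Cebeci-Bradshaw)."""
    dB = np.log(1.0 + 0.3 * ks_plus) / kappa
    return np.log(y_plus) / kappa + B - dB

y = np.geomspace(30, 1000, 5)        # sampling points in the log layer
for ks in (0.0, 20.0, 100.0):        # smooth, transitional, fully rough
    print(f"ks+={ks:5.0f}:", np.round(u_plus(y, ks), 2))
```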

Findings

The findings emphasize the necessity of considering the constricted dimensions of the flow cross section, thereby improving the alignment of derived results with theoretical estimations. Moreover, this study explores the effects of roughness on flow characteristics within the minichannel with stationary and rotating walls, offering valuable insights into this intricate phenomenon and demonstrating the appropriate performance of the chosen roughness model in the studied cases.

Originality/value

The originality of this investigation lies in the assessment and validation of flow characteristics inside a minichannel with stationary and corotating walls when roughness is implemented. This phenomenon, along with the effect of roughness on the transport of kinetic energy to the rough surface of a minichannel, is assessed using an in-house test setup.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 20 August 2024

Seema Pahwa, Amandeep Kaur, Poonam Dhiman and Robertas Damaševičius

Abstract

Purpose

The study aims to enhance the detection and classification of conjunctival eye diseases' severity through the development of ConjunctiveNet, an innovative deep learning framework. This model incorporates advanced preprocessing techniques and utilizes a modified Otsu’s method for improved image segmentation, aiming to improve diagnostic accuracy and efficiency in healthcare settings.

Design/methodology/approach

ConjunctiveNet employs a convolutional neural network (CNN) enhanced through transfer learning. The methodology integrates rescaling, normalization, Gaussian blur filtering and contrast-limited adaptive histogram equalization (CLAHE) for preprocessing. The segmentation employs a novel modification of Otsu's method. The framework's effectiveness is compared against five pretrained CNN architectures: AlexNet, ResNet-50, ResNet-152, VGG-19 and DenseNet-201.
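
This preprocessing chain maps onto standard OpenCV calls, sketched below with plain Otsu thresholding (the paper's modified Otsu variant is not reproduced) and a placeholder file path.

```python
import cv2

img = cv2.imread("conjunctiva.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
img = cv2.resize(img, (224, 224))                          # rescaling step

# Contrast-limited adaptive histogram equalization, then denoising.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img = clahe.apply(img)
img = cv2.GaussianBlur(img, (5, 5), 0)

# Plain Otsu: the threshold is chosen automatically (the 0 is ignored).
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("conjunctiva_mask.png", mask)
```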

Findings

The study finds that ConjunctiveNet significantly outperforms existing models in accuracy for detecting various severity stages of conjunctival eye conditions. The model demonstrated superior performance in classifying four distinct severity stages (initial, moderate, high and severe) plus a healthy stage, offering a reliable tool for enhancing screening and diagnosis processes in ophthalmology.

Originality/value

ConjunctiveNet represents a significant advancement in the automated diagnosis of eye diseases, particularly conjunctivitis. Its originality lies in the integration of modified Otsu’s method for segmentation and its comprehensive preprocessing approach, which collectively enhance its diagnostic capabilities. This framework offers substantial value to the field by improving the accuracy and efficiency of conjunctival disease severity classification, thus aiding in better healthcare delivery.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X
