Search results
1 – 10 of over 1000

Fangqi Hong, Pengfei Wei and Michael Beer
Abstract
Purpose
Bayesian cubature (BC) has emerged as one of the most competitive approaches for estimating multi-dimensional integrals, especially when the integrand is expensive to evaluate, and alternative acquisition functions, such as the Posterior Variance Contribution (PVC) function, have been developed for adaptive experimental design of the integration points. However, these sequential design strategies prevent BC from being implemented in a parallel scheme. Therefore, this paper aims to develop a parallelized adaptive BC method to further improve computational efficiency.
Design/methodology/approach
By theoretically examining the multimodal behavior of the PVC function, it is concluded that the multiple local maxima all contribute substantially to the integration accuracy and can all be selected as design points, providing a practical route to parallelizing adaptive BC. Inspired by this finding, four multimodal optimization algorithms, including one newly developed in this work, are introduced to find multiple local maxima of the PVC function in a single run and thereby enable parallel implementation of adaptive BC.
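The batch-selection idea above can be sketched in a few lines. This is a schematic illustration only, not the authors' algorithm: a toy one-dimensional Gaussian-process posterior variance stands in for the PVC acquisition, and a simple grid scan stands in for the multimodal optimizers.

```python
import numpy as np

def rbf_kernel(a, b, ls=0.3):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def posterior_variance(grid, x_obs, noise=1e-8):
    """GP posterior variance at grid points given observed inputs x_obs."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k = rbf_kernel(grid, x_obs)
    return 1.0 - np.sum(k @ np.linalg.inv(K) * k, axis=1)

def local_maxima(values):
    """Indices of strict interior local maxima of a 1-D array."""
    v = values
    return [i for i in range(1, len(v) - 1) if v[i] > v[i - 1] and v[i] > v[i + 1]]

grid = np.linspace(0.0, 1.0, 401)
x_obs = np.array([0.1, 0.5, 0.9])        # current integration points
acq = posterior_variance(grid, x_obs)    # stand-in for the PVC acquisition
batch = grid[local_maxima(acq)]          # all local maxima -> one parallel batch
```

All the maxima found in one pass form a batch of new integration points, which is exactly what lets the expensive integrand evaluations run in parallel.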
Findings
The superiority of the parallel schemes and the performance of the four multimodal optimization algorithms are demonstrated and compared with the k-means clustering method using two numerical benchmarks and two engineering examples.
Originality/value
The multimodal behavior of the acquisition function for BC is comprehensively investigated, showing that all local maxima of the acquisition function contribute to adaptive BC accuracy. Parallelization of adaptive BC is realized with four multimodal optimization methods.
Xin Fan, Yongshou Liu, Zongyi Gu and Qin Yao
Abstract
Purpose
Ensuring the safety of structures is important. However, when a structure possesses both an implicit performance function and an extremely small failure probability, traditional methods struggle to conduct a reliability analysis. Therefore, this paper proposes a reliability analysis method aimed at enhancing the efficiency of rare-event analysis, using the widely recognized relevance vector machine (RVM).
Design/methodology/approach
Drawing from the principles of importance sampling (IS), this paper employs Harris Hawks Optimization (HHO) to ascertain the optimal design point. This approach not only guarantees precision but also facilitates the RVM in approximating the limit state surface. When the U learning function, designed for Kriging, is applied to RVM, it results in sample clustering in the design of experiments (DoE). Therefore, this paper proposes an FU learning function, which is better suited to RVM.
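For context, the standard U learning function from Kriging-based reliability analysis (the starting point the FU function modifies; the FU function's exact form is not reproduced here) can be sketched as follows, using hypothetical surrogate predictions:

```python
import numpy as np

def u_learning(mu, sigma, eps=1e-12):
    """Standard U learning function: a small U means the candidate is close to
    the limit state (mu near 0) and its prediction is uncertain (large sigma)."""
    return np.abs(mu) / (sigma + eps)

# Hypothetical surrogate mean/std predictions at five candidate samples
mu = np.array([2.1, 0.3, -1.5, 0.05, 4.0])
sigma = np.array([0.5, 0.4, 0.2, 0.6, 0.1])

u = u_learning(mu, sigma)
next_point = int(np.argmin(u))   # the lowest-U candidate is evaluated next
```

A common stopping criterion for this strategy is min U >= 2, corresponding to a high confidence that the sign of the performance function is predicted correctly everywhere.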
Findings
Three numerical examples and two engineering problems demonstrate the effectiveness of the proposed method.
Originality/value
By employing the HHO algorithm, this paper innovatively applies RVM to IS reliability analysis, proposing a novel method termed RVM-HIS. RVM-HIS demonstrates exceptional computational efficiency, making it eminently suitable for rare-event reliability analysis with implicit performance functions. Moreover, its computational efficiency has been significantly enhanced through the improvement of the U learning function.
Neeraj Joshi, Sudeep R. Bapat and Raghu Nandan Sengupta
Abstract
Purpose
The purpose of this paper is to develop optimal estimation procedures for the stress-strength reliability (SSR) parameter R = P(X > Y) of an inverse Pareto distribution (IPD).
Design/methodology/approach
We estimate the SSR parameter R = P(X > Y) of the IPD under the minimum risk and bounded risk point estimation problems, where X and Y are strength and stress variables, respectively. The total loss function considered is a combination of estimation error (squared error) and cost, utilizing which we minimize the associated risk in order to estimate the reliability parameter. As no fixed-sample technique can be used to solve the proposed point estimation problems, we propose some “cost and time efficient” adaptive sampling techniques (two-stage and purely sequential sampling methods) to tackle them.
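A purely sequential sampling scheme of the kind described above can be sketched as follows. This is a simplified illustration only: the stopping rule is a plain normal-approximation confidence-width criterion rather than the paper's risk-based rules, and the strength/stress distributions are placeholder exponentials, not the inverse Pareto.

```python
import numpy as np

rng = np.random.default_rng(0)

def sequential_R(draw_x, draw_y, half_width=0.02, z=1.96, batch=50, n_max=200_000):
    """Purely sequential estimation of R = P(X > Y): keep drawing paired
    samples until the confidence half-width for R falls below `half_width`."""
    wins = []
    while len(wins) < n_max:
        wins.extend((draw_x(batch) > draw_y(batch)).tolist())
        n = len(wins)
        r = np.mean(wins)
        if z * np.sqrt(max(r * (1 - r), 1e-12) / n) <= half_width:
            break
    return r, n

# Placeholder strength/stress distributions (NOT the paper's inverse Pareto):
draw_x = lambda n: rng.exponential(2.0, n)   # strength X
draw_y = lambda n: rng.exponential(1.0, n)   # stress Y
r_hat, n_used = sequential_R(draw_x, draw_y)
```

For these placeholder exponentials the true value is R = 2/3; the point of the sequential rule is that the sample size is not fixed in advance but determined by the data, mirroring the "cost and time efficient" adaptive sampling idea.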
Findings
We state important results based on the proposed sampling methodologies. These include estimations of the expected sample size, standard deviation (SD) and mean square error (MSE) of the terminal estimator of reliability parameters. The theoretical values of reliability parameters and the associated sample size and risk functions are well supported by exhaustive simulation analyses. The applicability of our suggested methodology is further corroborated by a real dataset based on insurance claims.
Originality/value
This study will be useful for scenarios where various logistical concerns are involved in the reliability analysis. The methodologies proposed in this study can reduce the number of sampling operations substantially and save time and cost to a great extent.
Shabir Ahmad Bhat, Makhmoor Bashir and Hafsah Jan
Abstract
Purpose
The purpose of this paper is to develop and test an integrated model to examine the relationship between work engagement and three facets of perceived job performance (PJP). The authors argue that work engagement might not optimally improve PJP unless it is channelized through information and communication technology orientation.
Design/methodology/approach
Data for the present research were collected from higher educational institutes in the northern region of India using a convenience sampling technique. Results of structural equation modeling (SEM) through AMOS 20 revealed that work engagement facilitates all three facets, i.e. task performance, contextual performance and adaptive performance, of teaching professionals. Furthermore, SEM results established the partial mediating effect of information and communication technology orientation between work engagement, task performance, contextual performance and adaptive performance.
Findings
Findings from the present research contribute theoretically as well as practically to the job performance and work engagement literature by giving administrators and practitioners insights into how to improve the overall job performance of teaching professionals by enhancing their engagement and addressing their need for digital know-how.
Originality/value
To the best of the authors’ knowledge, this study is one of the first to study the impact of work engagement and information and communication technology on the three facets of PJP using a diverse sample of 1030 teachers from universities in North India.
Ahmad Fadhly Arham, Nor Sabrena Norizan, Zulkefli Muhamad Hanapiyah, Maz Izuan Mazalan and Heri Yanto
Abstract
Purpose
The purpose of this study is to establish the relationship between digital leadership and academic performance. It models the digitalization process, outlining why and how digital leadership is important for better academic performance. At the same time, this study examines the role of digital culture as a moderating variable in the direct relationship between the main variables of the study. The study aims to expand the domain of academic performance at the university by including a more recent leadership-related aspect and the organizational context of digital culture.
Design/methodology/approach
The study opted for a descriptive design, using survey instruments to collect the data. The sample population consisted of students currently enrolled at the Faculty of Business and Management, Universiti Teknologi MARA, Melaka, Malaysia. Based on convenience sampling, 383 samples were drawn from the sample population. All items were adopted from previous literature, and expert feedback was obtained to examine the validity of the instruments. The data were analysed using SPSS and SmartPLS version 3.0.
Findings
This study provides empirical insights into how digital leadership is important for the academic performance of the new millennials. Digital culture is also found to have a significant moderating effect on this relationship. It suggests that universities must promote a digitalization culture and embed the use of technology and digitalization into teaching and learning to cultivate a more effective learning process among university students. This is important as elements of digital leadership, including adaptive role, attitude, digital competency, digital skill and inspirational role, are found to contribute significantly to academic performance.
Research limitations/implications
This study only focuses on samples taken from one faculty on one campus, thus limiting its scope. Future research is encouraged to replicate the same study setting with a larger sample size from different faculties, or perhaps from different universities. These propositions could help to better generalize the research findings on the practice of digital leadership on academic performance in the country. However, this study established a digital leadership model that can be applied to undergraduate students at universities. Also, the inclusion of digital culture can strengthen the learning process.
Practical implications
This study includes implications for developing digital leadership attributes and promoting digital culture among university students and within the university environment to support better academic performance. Digital leadership is found to be an important criterion of academic performance in this digital-age society, and cultivating digital culture enhances students' academic performance. These findings should prompt the university to actively engage in fostering a digitalization culture. Also, the top management of the university should encourage students to be adaptive and to cultivate the attributes of digital leaders, as their readiness to cope with technological change has a significant positive impact on their academic performance.
Social implications
It is important to ensure that the future graduates that are being produced are ready to take on more challenges as digital leaders in the digital society. This might accelerate the country’s initiatives and efforts towards becoming a developed nation. Thus, investing in oneself to become digitally literate and competent might not only influence their academic performance, but they will also be equipped to fulfil one of the expectations of future employers of potential graduates, which is possessing digital leadership.
Originality/value
Digitalization is not only about the technology; it is about the people too. As the study of digital leadership is still in its infancy, this study is unique as it is among the earliest to establish digital leadership constructs within the context of Malaysia. It informs the university that digital leadership contributes significantly to academic performance. Thus, the university is encouraged to nurture digitalization, not only in teaching and learning but also among the people within the university environment. Determining the right programs and plans for the curriculum will help students develop digital leadership attributes more effectively. Finally, improving digitalization among students and in the culture is important, as these elements have a significant effect on academic performance.
Miaoxian Guo, Shouheng Wei, Chentong Han, Wanliang Xia, Chao Luo and Zhijian Lin
Abstract
Purpose
Surface roughness has a serious impact on the fatigue strength, wear resistance and life of mechanical products. Realizing the evolution of surface quality through theoretical modeling takes a lot of effort. To predict the surface roughness of milling processing, this paper aims to construct a neural network based on deep learning and data augmentation.
Design/methodology/approach
This study proposes a method consisting of three steps. Firstly, a machine tool multisource data acquisition platform is established, which combines sensor monitoring with machine tool communication to collect processing signals. Secondly, feature parameters are extracted to reduce interference and improve the model's generalization ability. Thirdly, for different expectations, the parameters of the deep belief network (DBN) model are optimized by the Tent-SSA algorithm to achieve more accurate roughness classification and regression prediction.
Findings
The adaptive synthetic sampling (ADASYN) algorithm can improve the classification prediction accuracy of DBN from 80.67% to 94.23%. After the DBN parameters were optimized by Tent-SSA, the roughness prediction accuracy was significantly improved. For the classification model, the prediction accuracy is improved by 5.77% based on ADASYN optimization. For regression models, different objective functions can be set according to production requirements, such as root-mean-square error (RMSE) or MaxAE, and the error is reduced by more than 40% compared to the original model.
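The ADASYN idea referenced above, generating more synthetic minority samples where minority points are surrounded by majority neighbours, can be sketched in simplified form (this is not the exact ADASYN algorithm or the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def adasyn_like(X_min, X_maj, n_new, k=5):
    """Simplified ADASYN: minority points with more majority points among
    their k nearest neighbours get more synthetic samples; each synthetic
    sample interpolates towards a random minority point."""
    X_all = np.vstack([X_min, X_maj])
    n_min = len(X_min)
    r = np.empty(n_min)
    for i, x in enumerate(X_min):
        d = np.linalg.norm(X_all - x, axis=1)
        nn = np.argsort(d)[1:k + 1]      # skip the point itself
        r[i] = np.mean(nn >= n_min)      # indices >= n_min are majority points
    w = r / r.sum() if r.sum() > 0 else np.full(n_min, 1.0 / n_min)
    idx = rng.choice(n_min, size=n_new, p=w)   # density-weighted allocation
    out = []
    for i in idx:
        j = rng.integers(n_min)
        lam = rng.random()
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = rng.normal(0.0, 1.0, (20, 2))     # toy minority class
X_maj = rng.normal(2.0, 1.0, (200, 2))    # toy majority class
X_syn = adasyn_like(X_min, X_maj, n_new=180)   # rebalance 20 vs 200
```

The resulting synthetic points lie between existing minority samples, which is why this family of methods improves classifier accuracy on imbalanced data without duplicating samples verbatim.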
Originality/value
A roughness prediction model based on multiple monitoring signals is proposed, which reduces the dependence on the acquisition of environmental variables and enhances the model's applicability. Furthermore, with the ADASYN algorithm, the Tent-SSA intelligent optimization algorithm is introduced to optimize the hyperparameters of the DBN model and improve the optimization performance.
Faguo Liu, Qian Zhang, Tao Yan, Bin Wang, Ying Gao, Jiaqi Hou and Feiniu Yuan
Abstract
Purpose
Light field images (LFIs) have gained popularity as a technology to increase the field of view (FoV) of plenoptic cameras since they can capture information about light rays with a large FoV. Wide FoV causes light field (LF) data to increase rapidly, which restricts the use of LF imaging in image processing, visual analysis and user interface. Effective LFI coding methods become of paramount importance. This paper aims to eliminate more redundancy by exploring sparsity and correlation in the angular domain of LFIs, as well as mitigate the loss of perceptual quality of LFIs caused by encoding.
Design/methodology/approach
This work proposes a new efficient LF coding framework. On the coding side, a new sampling scheme and a hierarchical prediction structure are used to eliminate redundancy in the LFI's angular and spatial domains. At the decoding side, high-quality dense LF is reconstructed using a view synthesis method based on the residual channel attention network (RCAN).
Findings
On three different LF datasets, our proposed coding framework not only reduces the transmitted bit rate but also maintains higher view quality than current state-of-the-art methods.
Originality/value
(1) A new sampling scheme is designed to synthesize high-quality LFIs while better ensuring LF angular domain sparsity. (2) To further eliminate redundancy in the spatial domain, new ranking schemes and hierarchical prediction structures are designed. (3) A synthetic network based on RCAN and a novel loss function is designed to mitigate the perceptual quality loss due to the coding process.
Junting Zhang, Mudaser Javaid, Shudi Liao, Myeongcheol Choi and Hann Earl Kim
Abstract
Purpose
The present study aimed to examine the relationship between humble leadership (HL) and employee adaptive performance by testing the mediating role of self-determination and the moderating role of employee attributions of HL.
Design/methodology/approach
A three-wave, two-source design was used to collect quantitative data from 301 employees and 45 direct supervisors of mainland Chinese enterprises. Testing the hypotheses was conducted through multiple regression analysis and moderated regression analysis.
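Moderated regression of the kind used here amounts to adding an interaction term to an ordinary regression. A minimal sketch on simulated stand-in data (hypothetical, not the study's survey data):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated stand-ins for the survey variables: x = humble leadership,
# m = employee attribution (moderator), y = adaptive performance.
n = 301
x = rng.normal(size=n)
m = rng.normal(size=n)
y = 0.5 * x + 0.2 * m - 0.3 * x * m + rng.normal(scale=0.5, size=n)

# Moderated regression: include the x*m interaction alongside the main effects.
X = np.column_stack([np.ones(n), x, m, x * m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
interaction_effect = beta[3]   # a nonzero coefficient indicates moderation
```

A significant interaction coefficient means the slope of x on y depends on the moderator, which is the statistical signature of the attribution effects reported below.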
Findings
Results showed that HL was positively related to employee adaptive performance. Additionally, the relationship between HL and employee adaptive performance was mediated by self-determination. Furthermore, this positive effect of HL on self-determination was minimized among employees who attributed HL to impression-management motives but was insignificant for employees who attributed HL to performance-improvement motives.
Originality/value
Much attention has been paid to how traditional “top-down” leadership styles relate to employee adaptive performance; however, the role of bottom-up leadership styles in employee adaptive performance has only been sporadically examined. The present study introduced HL, a typical bottom-up leadership style, and developed a moderated mediation model to investigate the potential effect of HL on employee adaptive performance. Moreover, by confirming the mediating role of self-determination, the authors further uncover how HL facilitates employees' adaptive performance. Meanwhile, the moderating role of employee attributions of HL found in this study offers new insights into the effectiveness of HL.
Lu Wang, Jiahao Zheng, Jianrong Yao and Yuangao Chen
Abstract
Purpose
With the rapid growth of the domestic lending industry, assessing whether the borrower of each loan is at risk of default is a pressing issue for financial institutions. Although some existing models handle such problems well, they still have shortcomings in certain respects. The purpose of this paper is to improve the accuracy of credit assessment models.
Design/methodology/approach
In this paper, three stages are used to improve the classification performance of a long short-term memory (LSTM) network, so that financial institutions can more accurately identify borrowers at risk of default. The first stage uses the K-Means-SMOTE algorithm to mitigate the class imbalance. In the second stage, ResNet is used for feature extraction, and a two-layer LSTM is then used for learning to strengthen the network's ability to mine and utilize deep information. Finally, model performance is improved by using the IDWPSO algorithm to optimize the neural network's parameters.
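IDWPSO is described as an improved particle swarm optimizer; its exact modifications are not given here, but the common baseline it builds on, PSO with a linearly decreasing inertia weight, can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, dim, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, bound=5.0):
    """Minimal PSO with a linearly decreasing inertia weight, a common
    baseline for improved variants such as IDWPSO."""
    x = rng.uniform(-bound, bound, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters   # inertia decays over time
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -bound, bound)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best, best_f = pso(lambda p: np.sum(p ** 2), dim=5)   # sphere test function
```

In hyperparameter tuning, `f` would score a trained network under a given parameter vector instead of the sphere function used here for illustration.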
Findings
On two imbalanced datasets (category ratios of 700:1 and 3:1, respectively), the multi-stage improved model was compared with ten other models using accuracy, precision, specificity, recall, G-measure, F-measure and the nonparametric Wilcoxon test. It was demonstrated that the multi-stage improved model showed a more significant advantage in evaluating the imbalanced credit dataset.
Originality/value
In this paper, the parameters of the ResNet-LSTM hybrid neural network, which can fully mine and utilize the deep information, are tuned by an innovative intelligent optimization algorithm to strengthen the classification performance of the model.
Qiangqiang Zhai, Zhao Liu, Zhouzhou Song and Ping Zhu
Abstract
Purpose
The Kriging surrogate model has demonstrated a powerful ability to address a variety of engineering challenges by emulating time-consuming simulations. However, for problems with high-dimensional input variables, it may be difficult to obtain a model with high accuracy and efficiency due to the curse of dimensionality. To meet this challenge, an improved high-dimensional Kriging modeling method based on the maximal information coefficient (MIC) is developed in this work.
Design/methodology/approach
The hyperparameter domain is first derived and the dataset of hyperparameter and likelihood function is collected by Latin Hypercube Sampling. MIC values are innovatively calculated from the dataset and used as prior knowledge for optimizing hyperparameters. Then, an auxiliary parameter is introduced to establish the relationship between MIC values and hyperparameters. Next, the hyperparameters are obtained by transforming the optimized auxiliary parameter. Finally, to further improve the modeling accuracy, a novel local optimization step is performed to discover more suitable hyperparameters.
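The Latin Hypercube Sampling step above can be sketched in a few lines of NumPy (a generic implementation, not the authors' code): each dimension is split into n equal strata and each stratum receives exactly one sample.

```python
import numpy as np

rng = np.random.default_rng(3)

def latin_hypercube(n, dim):
    """Latin Hypercube Sampling on [0, 1]^dim: every one of the n equal-width
    strata of each dimension contains exactly one sample point."""
    # one uniformly placed point per stratum, initially on the diagonal
    u = (np.arange(n)[:, None] + rng.random((n, dim))) / n
    for j in range(dim):
        u[:, j] = u[rng.permutation(n), j]   # shuffle each column independently
    return u

H = latin_hypercube(20, 40)   # e.g. 20 hyperparameter samples in 40 dimensions
```

Compared with plain random sampling, this stratification spreads a small budget of expensive likelihood evaluations evenly across the hyperparameter domain, which is why it is the standard choice for collecting such datasets.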
Findings
The proposed method is then applied to five representative mathematical functions with dimensions ranging from 20 to 100 and an engineering case with 30 design variables.
Originality/value
The results show that the proposed high-dimensional Kriging modeling method can obtain more accurate results than the other three methods, and it has an acceptable modeling efficiency. Moreover, the proposed method is also suitable for high-dimensional problems with limited sample points.