Search results

1 – 10 of over 1000
Article
Publication date: 12 March 2024

Hui Zhao, Simeng Wang and Chen Lu


Abstract

Purpose

With the continuous development of the wind power industry, wind power plants (WPPs) have become the focus of resource development within the industry. Site selection, as the initial stage of WPP development, directly determines the feasibility of construction and the future revenue of a WPP. The purpose of this paper is therefore to study WPP siting and to establish a framework for siting decision-making.

Design/methodology/approach

First, a site selection evaluation index system is constructed from four aspects (economy, geography, environment and society) using the literature review method and the Delphi method, and the weight of each index is determined by combining the Decision-Making Trial and Evaluation Laboratory (DEMATEL) method with the entropy weight (EW) method. Then, prospect theory and the multi-criteria compromise ranking method VIKOR are introduced to rank the candidate sites and determine the best one.
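
The entropy weight (EW) step of such a framework is mechanical enough to sketch. The following is a minimal illustration with a made-up site-evaluation matrix, assuming benefit-type, non-negative criteria; it computes only the objective EW weights, which the framework would then combine with the subjective DEMATEL weights.

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights via the entropy weight method.

    X: decision matrix, rows = alternatives, columns = criteria,
    assumed benefit-type and non-negative.
    """
    X = np.asarray(X, dtype=float)
    # Normalize each column so its entries sum to 1.
    P = X / X.sum(axis=0)
    n = X.shape[0]
    # Shannon entropy per criterion (0 * log 0 treated as 0).
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)
    # Degree of divergence: criteria with lower entropy discriminate more.
    d = 1.0 - E
    return d / d.sum()

# Hypothetical evaluation matrix: 4 candidate sites x 3 criteria.
scores = [[7, 3, 9],
          [6, 5, 8],
          [8, 4, 7],
          [5, 6, 6]]
w = entropy_weights(scores)
```

Criteria whose scores vary more across sites receive larger weights, which is the intended behavior of the EW method.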

Findings

China is used as a case study, and the robustness and reliability of the methodology are demonstrated through sensitivity analysis, comparative analysis and ablation experiment analysis. This paper aims to provide a useful reference for WPP siting research.

Originality/value

In this paper, DEMATEL and EW are used jointly to determine the indicator weights, overcoming the drawback of relying on a single weighting method. Prospect theory and VIKOR are combined to construct a decision model that accounts for both the decision-maker's attitude and the compromise nature of the decision result. This framework is applied to WPP siting research for the first time.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 4 April 2024

Dong Li, Yu Zhou, Zhan-Wei Cao, Xin Chen and Jia-Peng Dai


Abstract

Purpose

This paper aims to establish a lattice Boltzmann (LB) method for solid-liquid phase transition (SLPT) from the pore scale to the representative elementary volume (REV) scale. By applying this method, detailed information about heat transfer and phase change processes within the pores can be obtained, while also enabling the calculation of larger-scale SLPT problems, such as shell-and-tube phase change heat storage systems.

Design/methodology/approach

A three-dimensional (3D) pore-scale enthalpy-based LB model is developed. The computational input parameters at the REV scale are derived from pore-scale calculations, ensuring consistency between the two scales. Approaches to reconstructing the 3D porous structure and determining the REV of metal foam are discussed. Conjugate heat transfer between the solid matrix and the solid-liquid phase change material (SLPCM) is implemented in the proposed model. A simple REV-scale LB model under the local thermal nonequilibrium condition is presented, together with a method for bridging the pore-scale and REV-scale enthalpy-based LB models through the REV.

Findings

This coupled method facilitates detailed simulations of flow, heat transfer and phase change within pores. The approach holds promise for multiscale calculations in latent heat storage devices with porous structures. The SLPT of the heat sinks for electronic device thermal control was simulated as a case, demonstrating the efficiency of the present models in designing and optimizing SLPT devices.

Originality/value

A coupled pore-scale and REV-scale LB method is developed as a numerical tool for investigating phase change in porous materials. This approach captures detail within the pores while remaining tractable over a large computational domain. The proposed method also addresses conjugate heat transfer between the SLPCM and the solid matrix within the enthalpy-based LB model.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539


Article
Publication date: 9 April 2024

Lu Wang, Jiahao Zheng, Jianrong Yao and Yuangao Chen


Abstract

Purpose

With the rapid growth of the domestic lending industry, assessing whether the borrower of each loan is at risk of default is a pressing issue for financial institutions. Although existing models handle such problems reasonably well, they still have shortcomings. The purpose of this paper is to improve the accuracy of credit assessment models.

Design/methodology/approach

In this paper, three successive stages are used to improve the classification performance of LSTM, so that financial institutions can more accurately identify borrowers at risk of default. First, the K-Means-SMOTE algorithm is used to reduce the class imbalance. Second, ResNet is used for feature extraction, followed by a two-layer LSTM, strengthening the network's ability to mine and exploit deep information. Finally, model performance is improved by using the IDWPSO algorithm to optimize the network during tuning.
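
The abstract does not spell out the IDWPSO variant, so the sketch below shows plain particle swarm optimization with a linearly decreasing inertia weight (one common "dynamic weight" improvement), minimizing a toy function as a stand-in for network tuning. All function and parameter names here are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_min(f, dim, n_particles=20, iters=100, w_max=0.9, w_min=0.4,
            c1=2.0, c2=2.0, lo=-5.0, hi=5.0):
    """Minimize f with PSO; inertia weight decays linearly from w_max to w_min."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)   # decreasing inertia
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

# Toy stand-in for hyperparameter tuning: minimize the sphere function.
best_x, best_val = pso_min(lambda z: float(np.sum(z**2)), dim=3)
```

A large early inertia weight favors exploration; the smaller late weight favors exploitation around the global best, which is the rationale behind dynamic-weight PSO variants.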

Findings

On two imbalanced datasets (class ratios of 700:1 and 3:1, respectively), the multi-stage improved model was compared with ten other models using accuracy, precision, specificity, recall, G-measure, F-measure and the nonparametric Wilcoxon test. The multi-stage model showed a significant advantage in evaluating the imbalanced credit datasets.

Originality/value

In this paper, the parameters of the ResNet-LSTM hybrid neural network, which can fully mine and utilize the deep information, are tuned by an innovative intelligent optimization algorithm to strengthen the classification performance of the model.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 22 March 2024

Sanaz Khalaj Rahimi and Donya Rahmani



Abstract

Purpose

The study aims to optimize truck routes by minimizing social and economic costs. It introduces a strategy involving diverse drones and their potential reuse, based on flight range, at demand nodes with damaged access roads (DNs). In the proposed hybrid truck-drone routing problem minimizing deprivation cost (HTDRP-DC), trucks can select and transport various drones to nodes with undamaged access roads (LDs) to reduce deprivation time. The nonlinear deprivation cost function is approximated by a two-piece linear function, leading to mixed-integer linear programming (MILP) formulations. A heuristic-based Benders decomposition approach is implemented to address medium and large instances; valid inequalities and a heuristic method tighten the convergence bounds, ensuring an efficient solution methodology.
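
The two-piece linearization can be sketched as follows, assuming (hypothetically) an exponentially growing deprivation cost and a single interior breakpoint. For a convex cost, the chords form an upper envelope that an MILP formulation can work with.

```python
import numpy as np

# Hypothetical nonlinear deprivation cost: grows exponentially with deprivation time t.
def deprivation_cost(t, a=0.1, b=0.05):
    return a * (np.exp(b * np.asarray(t, dtype=float)) - 1.0)

def two_piece_approx(t, t_break, t_max, a=0.1, b=0.05):
    """Two-piece linear approximation through (0, 0), (t_break, c(t_break)), (t_max, c(t_max))."""
    c_break = float(deprivation_cost(t_break, a, b))
    c_max = float(deprivation_cost(t_max, a, b))
    slope1 = c_break / t_break                    # slope of the first piece
    slope2 = (c_max - c_break) / (t_max - t_break)  # slope of the second piece
    t = np.asarray(t, dtype=float)
    return np.where(t <= t_break,
                    slope1 * t,
                    c_break + slope2 * (t - t_break))

# Deprivation times in hours (illustrative horizon of 48 h, breakpoint at 24 h).
ts = np.linspace(0.0, 48.0, 97)
approx = two_piece_approx(ts, t_break=24.0, t_max=48.0)
exact = deprivation_cost(ts)
```

Because the cost is convex, the piecewise-linear surrogate never underestimates it on the horizon, so the MILP errs on the side of overstating deprivation.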

Design/methodology/approach

Research has yet to address two critical factors in disaster logistics together: minimizing social and economic costs simultaneously and using drones in relief distribution. Deprivation, as a social cost, measures the human suffering caused by a shortage of relief supplies. The proposed hybrid truck-drone routing problem minimizing deprivation cost (HTDRP-DC) involves distributing relief supplies to dispersed demand nodes with undamaged (LDs) or damaged (DNs) access roads, utilizing multiple trucks and diverse drones. A Benders decomposition approach is enhanced by accelerating techniques.

Findings

Incorporating both deprivation and economic costs leads to the selection of optimal routes, effectively reducing the time required to assist affected areas. Employing various drone types and reusing them at damaged nodes further reduces deprivation time and the associated deprivation costs. Valid inequalities and the heuristic method for solving the master problem substantially reduce computational time and iterations compared to GAMS and the classical Benders decomposition algorithm. The proposed heuristic-based Benders decomposition approach is applied to a disaster scenario in Tehran, demonstrating efficient solutions for the HTDRP-DC in terms of computational time and convergence rate.

Originality/value

This research introduces the HTDRP-DC problem, which minimizes deprivation costs by treating the vehicle's arrival time as the deprivation time, offering a unique way to optimize route selection in relief distribution. Furthermore, integrating heuristic methods and valid inequalities into the Benders decomposition approach enhances its effectiveness in solving complex routing problems in disaster scenarios.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 17 April 2023

Ashlyn Maria Mathai and Mahesh Kumar


Abstract

Purpose

In this paper, all the parameters of a mixture of exponential and Rayleigh distributions, combined in proportions α and 1 − α, are estimated based on fuzzy data.

Design/methodology/approach

Maximum likelihood estimation (MLE) and the method of moments (MOM) are applied for estimation. Fuzzy data in the form of triangular and Gaussian fuzzy numbers, for different sample sizes, are considered to illustrate the resulting estimates and to compare the two methods. In addition, the obtained results are compared with existing results for crisp data in the literature.
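
For crisp data, the MLE step can be sketched numerically; the fuzzy-data version replaces each observation's density with an integral against its membership function. The mixture density is f(x) = αλe^(−λx) + (1 − α)(x/σ²)e^(−x²/(2σ²)). All sample values and starting points below are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulate crisp data from the mixture alpha*Exp(lam) + (1-alpha)*Rayleigh(sigma).
alpha_true, lam_true, sigma_true = 0.4, 2.0, 1.5
n = 2000
from_exp = rng.random(n) < alpha_true
x = np.where(from_exp,
             rng.exponential(1.0 / lam_true, n),
             rng.rayleigh(sigma_true, n))

def neg_loglik(theta):
    """Negative log-likelihood of the exponential-Rayleigh mixture."""
    alpha, lam, sigma = theta
    f_exp = lam * np.exp(-lam * x)
    f_ray = (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))
    return -np.sum(np.log(alpha * f_exp + (1.0 - alpha) * f_ray))

res = minimize(neg_loglik, x0=[0.5, 1.0, 1.0], method="L-BFGS-B",
               bounds=[(1e-3, 1 - 1e-3), (1e-3, None), (1e-3, None)])
alpha_hat, lam_hat, sigma_hat = res.x
```

With a moderately large crisp sample the numerical MLE recovers all three parameters; the paper's contribution is carrying this through when each observation is itself fuzzy.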

Findings

Accounting for fuzziness in the data is very useful for obtaining reliable results in the presence of vagueness. Mean square errors (MSEs) of the resulting estimators are computed using both crisp and fuzzy data. In terms of MSE, the maximum likelihood estimators are observed to perform better than the moment estimators.

Originality/value

Classical methods of obtaining estimators of unknown parameters assume the collected data to be crisp or exact, and so fail to give realistic estimators when they are not. In practice, precise data are not always available: observations are often incomplete and sometimes expressed as linguistic variables. Such data can be handled by generalizing the classical inference methods using fuzzy set theory.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 2 May 2023

Dongyuan Zhao, Zhongjun Tang and Duokui He


Abstract

Purpose

With the intensification of market competition, there is a growing demand for weak signal identification and evolutionary analysis in enterprise foresight. For decades, many scholars have conducted relevant research. However, existing studies each approach the topic from a single angle and lack a systematic, comprehensive overview. In this paper, the authors summarize the articles on weak signal recognition and evolutionary analysis, in an attempt to contribute to the relevant research.

Design/methodology/approach

The authors develop a systematic overview framework based on the classical three-dimensional space model of weak signals. The framework comprehensively summarizes current research insights and knowledge along three dimensions: research field, identification methods and interpretation methods.

Findings

The results show that the level of automation in weak signal recognition and analysis needs to be improved, so that valuable human resources can be shifted to the decision-making stage. In addition, multiple types of data sources should be coordinated, research subfields expanded and weak signal recognition and interpretation methods optimized. This would broaden future weak signal research, make theoretical and practical contributions to enterprise foresight and provide a reference for governments establishing weak signal monitoring, evaluation and early warning mechanisms.

Originality/value

The authors develop a systematic overview framework based on the most classical three-dimensional space model of weak signals. It comprehensively summarizes the current research insights and knowledge from three dimensions of research field, identification methods and interpretation methods.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X


Article
Publication date: 19 December 2023

Jinchao Huang


Abstract

Purpose

Single-shot multi-category clothing recognition and retrieval play a crucial role in online searching and offline settlement scenarios. Existing clothing recognition methods based on RGBD clothing images often suffer from high-dimensional feature representations, leading to compromised performance and efficiency.

Design/methodology/approach

To address this issue, this paper proposes a novel method called Manifold Embedded Discriminative Feature Selection (MEDFS) to select global and local features, thereby reducing the dimensionality of the feature representation and improving performance. Specifically, by combining three global features and three local features, a low-dimensional embedding is constructed to capture the correlations between features and categories. The MEDFS method designs an optimization framework utilizing manifold mapping and sparse regularization to achieve feature selection. The optimization objective is solved using an alternating iterative strategy, ensuring convergence.
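
MEDFS itself couples manifold mapping with sparse regularization solved by alternating iteration. As a much simpler stand-in for the sparsity mechanism alone, the sketch below runs proximal-gradient (ISTA) lasso on synthetic data and keeps the features with nonzero coefficients; the data and regularization strength are hypothetical and this is not the MEDFS algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 200 samples, 20 features, only the first 3 informative.
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[:3] = [1.0, 0.8, -0.6]
y = X @ w_true + 0.1 * rng.normal(size=200)

def ista_lasso(X, y, lam=0.15, iters=500):
    """Minimize 0.5/n * ||Xw - y||^2 + lam * ||w||_1 by proximal gradient (ISTA)."""
    n = len(y)
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        z = w - grad / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return w

w = ista_lasso(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-3)
```

The ℓ1 penalty drives uninformative coefficients to exactly zero, so feature selection falls out of the optimization itself, which is the general idea behind sparse-regularization-based selection methods such as MEDFS.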

Findings

Empirical studies conducted on a publicly available RGBD clothing image dataset demonstrate that the proposed MEDFS method achieves highly competitive clothing classification performance while maintaining efficiency in clothing recognition and retrieval.

Originality/value

This paper introduces a novel approach for multi-category clothing recognition and retrieval, incorporating the selection of global and local features. The proposed method holds potential for practical applications in real-world clothing scenarios.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X


Open Access
Article
Publication date: 5 October 2022

Stratos Moschidis, Angelos Markos and Athanasios C. Thanopoulos



Abstract

Purpose

The purpose of this paper is to create an automatic interpretation of the results of multiple correspondence analysis (MCA) for categorical variables, so that nonexpert users can immediately and safely interpret the results, which concern the categories of variables that interact strongly and determine the trends of the subject under investigation.

Design/methodology/approach

This study is a novel theoretical approach to interpreting the results of the MCA method. The classical interpretation of MCA results is based on three indicators: the projection (F) of the category points of the variables on the factorial axes, the contribution of a point to axis creation (CTR) and the correlation (COR) of a point with an axis. Using these indicators together is arduous, particularly for nonexpert users, and frequently results in misinterpretation. The current study synthesizes the three indicators so that interpretation rests on a single new indicator, much as interpretation in the well-known principal component analysis (PCA) method for continuous variables rests on a single index.
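
The three classical indicators can be written down compactly. For one factorial axis k, with point masses m_i, projections F_ik and squared distances d_i² to the centroid, CTR_ik = m_i F_ik² / λ_k (where λ_k is the axis's eigenvalue) and COR_ik = F_ik² / d_i². A toy computation with hypothetical values:

```python
import numpy as np

# Hypothetical category points on one factorial axis k.
F = np.array([0.9, -0.3, 0.5, -1.1, 0.2])   # projections on axis k
m = np.array([0.1, 0.3, 0.2, 0.15, 0.25])   # point masses, summing to 1
d2 = np.array([1.2, 0.4, 0.8, 1.5, 0.3])    # squared distances to the centroid

eig_k = np.sum(m * F**2)   # inertia carried by the axis (its eigenvalue)
CTR = m * F**2 / eig_k     # contribution of each point to axis creation
COR = F**2 / d2            # quality of representation of each point on the axis
```

The contributions sum to 1 over the points of an axis and each COR lies in [0, 1]; the paper's proposal is precisely to fold F, CTR and COR into one indicator so the user never has to juggle the three tables.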

Findings

Two concepts are proposed in the new theoretical approach: the interpretative axis, corresponding to the classical factorial axis, and the interpretative plane, corresponding to the factorial plane. As will be seen, these offer clear and safe interpretative results in MCA.

Research limitations/implications

In the proposed automatic interpretation of the MCA results, the interpretative axes do not carry the actual projections of the points, as the original factorial axes do. This, however, does not matter to the nonexpert user, who only needs to distinguish the categories of variables that determine the most pronounced trends of the phenomenon being examined.

Practical implications

The results of this research can have positive implications for the dissemination of MCA as a method and its use as an integrated exploratory data analysis approach.

Originality/value

Interpreting MCA results presents difficulties for the nonexpert user and sometimes leads to misinterpretation; this difficulty persists in the other interpretative proposals for MCA. The proposed method allows the MCA results to be interpreted clearly and accurately, and thus contributes to the dissemination of MCA as an integrated method for the analysis and exploration of categorical data.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964


Article
Publication date: 12 April 2024

Tongzheng Pu, Chongxing Huang, Haimo Zhang, Jingjing Yang and Ming Huang


Abstract

Purpose

Forecasting population movement trends is crucial for implementing effective policies to regulate labor force growth and understand demographic changes. Combining migration theory expertise and neural network technology can bring a fresh perspective to international migration forecasting research.

Design/methodology/approach

This study proposes a conditional generative adversarial network model incorporating migration knowledge, the MK-CGAN. By using migration knowledge to design the model's parameters, MK-CGAN can effectively address the limited-data problem, thereby enhancing the accuracy of migration forecasts.

Findings

The model was tested by forecasting migration flows between different countries and showed good generalizability and validity. The results are robust: the proposed model achieves lower mean absolute error, mean squared error, root mean square error and mean absolute percentage error, and a higher R² (reaching 0.9855), than long short-term memory (LSTM), gated recurrent unit, generative adversarial network (GAN) and traditional gravity models.
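
The five comparison metrics are standard and easy to reproduce. A small sketch with hypothetical migration-flow values (the data and figures here are illustrative, not the paper's):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE, MAPE and R^2 for comparing forecasting models."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = float(np.mean(np.abs(err)))
    mse = float(np.mean(err**2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err / y_true)) * 100.0)  # assumes y_true != 0
    ss_res = float(np.sum(err**2))
    ss_tot = float(np.sum((y_true - y_true.mean())**2))
    r2 = 1.0 - ss_res / ss_tot
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "MAPE": mape, "R2": r2}

# Hypothetical migration-flow observations vs. forecasts (thousands of migrants).
m = regression_metrics([120, 95, 143, 80, 110], [118, 101, 139, 84, 108])
```

Note the direction of each metric: the four error measures should be as low as possible, while R² should be as close to 1 as possible.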

Originality/value

This study is significant because it demonstrates a highly effective technique for predicting international migration using conditional GANs. By incorporating migration knowledge into the models, prediction accuracy is improved and valuable insights are gained into the differences between the models' characteristics. SHapley Additive exPlanations (SHAP) are used to deepen the understanding of these differences and to provide clear, concise explanations of the model predictions. The results demonstrate the theoretical significance and practical value of the MK-CGAN model in predicting international migration.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288


Article
Publication date: 20 July 2023

Mu Shengdong, Liu Yunjie and Gu Jijian


Abstract

Purpose

By introducing the Stacking algorithm to address the underfitting caused by insufficient data in traditional machine learning, this paper provides a new solution to the cold-start problem of entrepreneurial borrowing risk control.

Design/methodology/approach

The authors introduce semi-supervised learning and ensemble learning into the field of transfer learning and propose Stacking-based model transfer learning: models are first trained independently on entrepreneurial borrowing credit data, the transfer strategy itself is then treated as the learning object and the Stacking algorithm combines the predictions of the source-domain and target-domain models.
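
A minimal sketch of the stacking idea, assuming hypothetical synthetic source and target domains and simple base learners: a meta-learner is fit on the two base models' predicted probabilities, learning how much to trust the transferred source-domain model. (A production version would use out-of-fold predictions to avoid leakage.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Hypothetical setup: a large "source domain" (traditional credit) and a small
# "target domain" (entrepreneurial borrowing) drawn from shifted distributions.
def make_domain(n, shift):
    X = rng.normal(size=(n, 5)) + shift
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > shift[0]).astype(int)
    return X, y

Xs, ys = make_domain(2000, np.zeros(5))        # source domain
Xt, yt = make_domain(200, np.full(5, 0.5))     # target domain (small sample)
Xt_test, yt_test = make_domain(500, np.full(5, 0.5))

# Level 0: one base model per domain.
src_model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xs, ys)
tgt_model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xt, yt)

# Level 1 (Stacking): a meta-learner combines both models' predicted
# probabilities on target-domain data.
meta_X = np.column_stack([src_model.predict_proba(Xt)[:, 1],
                          tgt_model.predict_proba(Xt)[:, 1]])
meta = LogisticRegression().fit(meta_X, yt)

meta_test = np.column_stack([src_model.predict_proba(Xt_test)[:, 1],
                             tgt_model.predict_proba(Xt_test)[:, 1]])
acc = meta.score(meta_test, yt_test)
```

The meta-learner's coefficients effectively weight the transferred source knowledge against the scarce target data, which is the "strategy itself as the learning object" idea in miniature.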

Findings

The effectiveness of the two transfer learning models is evaluated with real entrepreneurial borrowing data. The algorithmic performance of Stacking-based model transfer learning improves further on the benchmark model without transfer learning techniques, with the model's area under the curve (AUC) rising to 0.8. Comparing the two transfer learning models reveals that the model-based approach performs better. The reason is that the sample-based approach only eliminates noisy samples that are relatively dissimilar to the entrepreneurial borrowing data; because the calculation and weighting of similarity are subjective, with no unified standard or procedure, there is no guarantee that the retained traditional credit samples share the sample distribution and feature structure of the entrepreneurial borrowing data.

Practical implications

From a practical standpoint, this work provides a new solution to the cold-start problem of entrepreneurial borrowing risk control: the small number of labeled high-quality samples cannot support the learning and deployment of big-data risk control models. By extending the training sample set with auxiliary-domain data through suitable transfer learning methods, the predictive performance of the model can be improved to a certain extent and more generalizable patterns can be learned.

Originality/value

This paper introduces transfer learning to the entrepreneurial borrowing scenario, provides a new solution to the cold-start problem of the entrepreneurial borrowing risk control system and verifies, through empirical data, the feasibility and effectiveness of transfer learning applied in the risk control field.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

