Search results

1 – 10 of 210
Article
Publication date: 8 May 2024

Lu Xu, Shuang Cao and Xican Li

To explore a new approach to hyperspectral estimation, this paper aims to establish a hyperspectral estimation model of soil organic matter content with the…


Abstract

Purpose

To explore a new approach to hyperspectral estimation, this paper aims to establish a hyperspectral estimation model of soil organic matter content using principal gradient grey information based on grey information theory.

Design/methodology/approach

First, the estimation factors are selected by transforming the spectral data. The eigenvalue matrix of the modelling samples is converted into a grey information matrix using the method of increasing information and taking the larger value, the principal gradient grey information of the modelling samples is calculated by pro-information interpolation and straight-line interpolation, respectively, and the hyperspectral estimation model of soil organic matter content is established. Then, the positive and inverse grey relational degrees are used to identify the principal gradient information quantity of the test samples corresponding to the known patterns, and the cubic polynomial method is used to optimize this quantity to improve estimation accuracy. Finally, the established model is used to estimate the soil organic matter content of Zhangqiu and Jiyang Districts of Jinan City, Shandong Province.
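
The positive and inverse grey relational degrees used for pattern matching build on the classical grey relational degree. A minimal sketch of the classical (Deng) form, assuming numpy and a resolution coefficient of 0.5 (both illustrative choices, not the authors' code):

```python
import numpy as np

def grey_relational_degree(reference, candidate, rho=0.5, eps=1e-12):
    """Deng's grey relational degree between two normalised 1-D sequences;
    rho is the resolution coefficient."""
    diff = np.abs(np.asarray(reference) - np.asarray(candidate))
    d_min, d_max = diff.min(), diff.max()
    # Grey relational coefficient at each estimation factor.
    gamma = (d_min + rho * d_max) / (diff + rho * d_max + eps)
    return float(gamma.mean())

# Match a test sample to the known pattern with the highest degree.
patterns = np.random.rand(5, 10)   # 5 known patterns, 10 estimation factors
test = np.random.rand(10)
best = int(np.argmax([grey_relational_degree(test, p) for p in patterns]))
```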

Findings

The results show that the model achieves high estimation accuracy: the average relative error over the 23 test samples is 5.7524% and the coefficient of determination is 0.9002. Compared with commonly used methods such as multiple linear regression, support vector machine and BP neural network, the hyperspectral estimation accuracy of soil organic matter content is significantly improved. The application example shows that the estimation model proposed in this paper is feasible and effective.

Practical implications

The estimation model in this paper not only fully exploits the internal grey information of known samples with “insufficient and incomplete information” but also effectively overcomes the randomness and grey uncertainty in spectral estimation. The research results enrich grey system theory and methods and provide a new approach for the hyperspectral estimation of soil properties such as organic matter content and water content.

Originality/value

The paper both realizes a new hyperspectral estimation model of soil organic matter content based on principal gradient grey information and effectively deals with the randomness and grey uncertainty in spectral estimation.

Article
Publication date: 26 June 2024

Thenysson Matos, Maisa Tonon Bitti Perazzini and Hugo Perazzini

This paper aims to analyze the performance of artificial neural networks with filling methods in predicting the minimum fluidization velocity of different biomass types for…

Abstract

Purpose

This paper aims to analyze the performance of artificial neural networks with filling methods in predicting the minimum fluidization velocity of different biomass types for bioenergy applications.

Design/methodology/approach

An extensive literature review was performed to create an efficient database for training purposes. The database consisted of experimental values of the minimum fluidization velocity, physical properties of the biomass particles (density, size and sphericity) and characteristics of the fluidization (monocomponent experiments or binary mixture). The neural models developed were divided into eight cases, differing mainly in the filling method type (K-nearest neighbors [KNN] or linear interpolation) and the number of input neurons. The results of the neural models were compared with the classical correlations proposed in the literature and with empirical equations derived from multiple regression analysis.
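
A minimal sketch of the two filling methods being compared, assuming scikit-learn's KNNImputer and pandas linear interpolation (the column names and values are illustrative, not the paper's database):

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Toy stand-in for the training database of particle properties.
df = pd.DataFrame({
    "density":    [1350.0, np.nan, 870.0, 1100.0],
    "size_mm":    [0.8, 1.2, np.nan, 2.0],
    "sphericity": [0.75, 0.80, 0.66, np.nan],
})

# Filling method 1: K-nearest neighbors imputation.
knn_filled = pd.DataFrame(
    KNNImputer(n_neighbors=2).fit_transform(df), columns=df.columns
)

# Filling method 2: linear interpolation along each column.
lin_filled = df.interpolate(method="linear", limit_direction="both")
```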

Findings

The performance of a given filling method depended on the characteristics and size of the database. The KNN method was superior when less training data was available and for specific fluidization experiments, such as monocomponent runs or binary mixtures. The linear interpolation method was superior for a wider and larger database including both monocomponent experiments and binary mixtures. The performance of the neural model was comparable with the predictions of the best-known correlations from the literature.

Originality/value

Techniques of machine learning, such as filling methods, were used to improve the performance of the neural models. Besides the typical comparisons with conventional correlations, comparisons with three main equations derived from multiple regression analysis were reported and discussed.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 8
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 31 May 2024

Xiuping Li and Ye Yang

Coordinating low-carbonization and digitalization is a practical implementation pathway to achieve high-quality economic development. Regions are under great emission reduction…

Abstract

Purpose

Coordinating low-carbonization and digitalization is a practical pathway to achieve high-quality economic development. Regions are under great pressure to cut emissions and achieve low-carbon development. However, the literature says little about why and how regional emission reduction pressure influences enterprise digital transformation. This study empirically tests the impact of emission reduction pressure on enterprise digital transformation and its mechanism.

Design/methodology/approach

This article takes data on non-financial listed companies from 2011 to 2020 as its sample. The digital transformation index is measured by the entropy value method, and a two-way fixed effects model is used to test the hypotheses.
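
A minimal sketch of the entropy value method for building such a composite index, assuming numpy (the indicator matrix is an illustrative stand-in for the paper's digital transformation indicators):

```python
import numpy as np

def entropy_weight_index(X):
    """X: (n_firms, m_indicators), larger values = better. Returns a
    composite index per firm weighted by indicator entropy."""
    n, m = X.shape
    # Min-max normalisation, shifted slightly to keep the logs finite.
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) + 1e-6
    P = Z / Z.sum(axis=0)                          # share of firm i in indicator j
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy of each indicator
    w = (1 - e) / (1 - e).sum()                    # more dispersed indicators weigh more
    return Z @ w

X = np.random.rand(100, 5)     # 100 firms, 5 digitalization indicators
index = entropy_weight_index(X)
```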

Findings

The research results show that emission reduction pressure forces enterprise digital transformation. The mechanism is that emission reduction pressure improves digital transformation by promoting enterprise innovation, while the digital economy moderates the nexus between emission reduction pressure and digital transformation. Furthermore, the effect of emission reduction pressure on digital transformation is more significant for non-state-owned, mature and high-tech enterprises.

Originality/value

This paper discusses the mediating role of enterprise innovation between carbon emission reduction pressure and enterprise digital transformation, as well as the moderating role of the digital economy. The research expands the body of knowledge about dual carbon targets, digitization and technological innovation. The findings update our understanding of how regional digital economy development affects enterprise digital transformation and provide theoretical guidance for realizing digital transformation through enterprise innovation.

Details

Business Process Management Journal, vol. 30 no. 5
Type: Research Article
ISSN: 1463-7154

Keywords

Article
Publication date: 18 January 2024

Jing Tang, Yida Guo and Yilin Han

Coal is a critical global energy source, and fluctuations in its price significantly impact related enterprises' profitability. This study aims to develop a robust model for…

Abstract

Purpose

Coal is a critical global energy source, and fluctuations in its price significantly impact related enterprises' profitability. This study aims to develop a robust model for predicting the coal price index to enhance coal purchase strategies for coal-consuming enterprises and provide crucial information for global carbon emission reduction.

Design/methodology/approach

The proposed coal price forecasting system combines data decomposition, semi-supervised feature engineering, ensemble learning and deep learning. It addresses the challenge of merging low-resolution and high-resolution data by adaptively combining both types and filling the gaps: interpolation for internal missing data and self-supervised learning for initial/terminal missing data.
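
A minimal sketch of the interpolation step for internal missing data, assuming pandas (the series values are illustrative); limit_area="inside" fills only gaps lying between observed values, leaving initial/terminal gaps for the self-supervised stage:

```python
import numpy as np
import pandas as pd

prices = pd.Series([np.nan, 550.0, np.nan, 580.0, 590.0, np.nan])
internal_filled = prices.interpolate(method="linear", limit_area="inside")
# -> [NaN, 550.0, 565.0, 580.0, 590.0, NaN]; the leading/trailing NaNs
#    remain and would be completed by the self-supervised model.
```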

Findings

The ensemble model, which combines long short-term memory, XGBoost and support vector regression, demonstrated the best prediction performance among the tested models. It exhibited superior accuracy and stability across multiple indices in two datasets, namely the Bohai-Rim steam-coal price index and coal daily settlement price.

Originality/value

The proposed coal price forecasting system stands out as it integrates data decomposition, semi-supervised feature engineering, ensemble learning and deep learning. Moreover, the system pioneers the use of self-supervised learning for filling in complex missing data, contributing to its originality and effectiveness.

Details

Data Technologies and Applications, vol. 58 no. 3
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 9 February 2024

Chao Xia, Bo Zeng and Yingjie Yang

Traditional multivariable grey prediction models define the background-value coefficients of the dependent and independent variables uniformly, ignoring the differences between…

Abstract

Purpose

Traditional multivariable grey prediction models define the background-value coefficients of the dependent and independent variables uniformly, ignoring the differences between their physical properties, which in turn affects the stability and reliability of the model performance.

Design/methodology/approach

A novel multivariable grey prediction model is constructed with different background-value coefficients for the dependent and independent variables and a one-to-one correspondence between variables and background-value coefficients, improving the smoothing effect of the background-value coefficients on the sequences. Furthermore, the fractional-order accumulating operator is introduced into the new model to weaken the randomness of the raw sequence. The particle swarm optimization (PSO) algorithm is used to optimize the background-value coefficients and the order of the model to improve performance.
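
A minimal sketch of the fractional-order accumulating operator, assuming the usual gamma-function form of the generalised binomial coefficient (the series is illustrative; in the paper the order r is tuned by PSO):

```python
import numpy as np
from scipy.special import gamma

def fractional_ago(x, r):
    """r-order accumulated generating sequence (r-AGO) of a 1-D series x."""
    n = len(x)
    out = np.zeros(n)
    for k in range(n):
        for i in range(k + 1):
            j = k - i
            # Generalised binomial coefficient C(j + r - 1, j).
            coeff = gamma(j + r) / (gamma(j + 1) * gamma(r))
            out[k] += coeff * x[i]
    return out

x = np.array([2.0, 3.1, 3.4, 3.8])
print(fractional_ago(x, 1.0))  # r = 1 reduces to the classical 1-AGO (cumsum)
```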

Findings

The new model structure has good variability and is compatible with current mainstream grey prediction models. Its performance is compared and analyzed in three typical cases, and the results show that the new model outperforms the other two similar grey prediction models.

Originality/value

This study has positive implications for enriching the method system of multivariable grey prediction models.

Details

Grey Systems: Theory and Application, vol. 14 no. 3
Type: Research Article
ISSN: 2043-9377

Keywords

Article
Publication date: 19 August 2024

Walaa Metwally Kandil, Fawzi H. Zarzoura, Mahmoud Salah Goma and Mahmoud El-Mewafi El-Mewafi Shetiwi

This study aims to present a new rapid digital elevation model (DEM) enhancement framework using Google Earth Engine (GEE), machine learning, weighted interpolation and spatial…

Abstract

Purpose

This study aims to present a new rapid digital elevation model (DEM) enhancement framework using Google Earth Engine (GEE), machine learning, weighted interpolation and spatial interpolation techniques with ground control points (GCPs). High-resolution DEMs are crucial spatial data that find extensive use in many analyses and applications.

Design/methodology/approach

First, rapid-DEM imports Shuttle Radar Topography Mission (SRTM) data and Sentinel-2 multispectral imagery from a user-defined time and area of interest into GEE. Second, the SRTM data with feature attributes from the Sentinel-2 imagery are generated and used as input to a support vector machine classification algorithm. Third, the inverse probability weighted interpolation (IPWI) approach uses 12 fixed GCPs as additional input to assign a probability to each pixel of the image and generate corrected SRTM elevations. Fourth, the enhanced DEM is gridded into regular points (E, N and H) with a contour interval of 5 m. Finally, the enhanced DEM is densified with GPS-surveyed GCPs through spatial interpolation techniques such as Kriging, inverse distance weighting, modified Shepard’s method and triangulation with linear interpolation.
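
A minimal sketch of plain inverse distance weighting, one of the densification interpolators listed above (the coordinates and power parameter are illustrative assumptions):

```python
import numpy as np

def idw(gcps_xy, gcps_h, query_xy, power=2.0, eps=1e-12):
    """Interpolate elevations at query points from GCP elevations,
    weighting each GCP by inverse distance to the query point."""
    d = np.linalg.norm(query_xy[:, None, :] - gcps_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # closer GCPs weigh more
    return (w * gcps_h).sum(axis=1) / w.sum(axis=1)

gcps_xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # E, N
gcps_h = np.array([12.0, 15.0, 9.0])                          # H
queries = np.array([[50.0, 50.0], [10.0, 10.0]])
print(idw(gcps_xy, gcps_h, queries))
```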

Findings

The results were compared to a 1-m vertically accurate reference DEM (RD) obtained by image matching with Worldview-1 stereo satellite images. The root mean square error (RMSE) of the original SRTM DEM was 5.95 m. The RMSE of the elevations estimated by the IPWI approach improved to 2.01 m, and that of the DEM generated by the Kriging technique to 1.85 m, a reduction of 68.91%.

Originality/value

A comparison with the RD demonstrates significant SRTM improvements. The suggested method clearly reduces the elevation error of the original SRTM DEM.

Article
Publication date: 25 July 2024

Gang Peng

This paper aims to construct positivity-preserving finite volume schemes for the three-dimensional convection–diffusion equation that are applicable to arbitrary polyhedral grids.

Abstract

Purpose

This paper aims to construct positivity-preserving finite volume schemes for the three-dimensional convection–diffusion equation that are applicable to arbitrary polyhedral grids.

Design/methodology/approach

The cell vertices are used to define the auxiliary unknowns, and the primary unknowns are defined at cell centers. The diffusion flux is discretized by the classical nonlinear two-point flux approximation. To ensure that the fully discrete scheme is positivity-preserving, an improved discretization of the convection flux is presented. In addition, a new positivity-preserving vertex interpolation method is derived from the linear reconstruction used in the convection flux discretization. Because the Picard iteration may converge slowly on the nonlinear system, Anderson acceleration of the Picard iteration is used to solve it, and a condition number monitor of the matrix is employed in the Anderson acceleration method for better robustness.
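
A minimal sketch of Anderson-accelerated Picard iteration on a generic fixed-point problem x = g(x), assuming SciPy's anderson solver and a toy contraction in place of the paper's nonlinear finite volume system:

```python
import numpy as np
from scipy.optimize import anderson

def g(x):
    # Toy contraction standing in for one Picard sweep of the scheme.
    return 0.5 * np.cos(x) + 0.1

x0 = np.zeros(4)

# Plain Picard iteration: x_{k+1} = g(x_k).
x = x0.copy()
for _ in range(50):
    x = g(x)

# Anderson acceleration: solve g(x) - x = 0 with Anderson mixing.
x_aa = anderson(lambda x: g(x) - x, x0, f_tol=1e-12)
print(np.allclose(x, x_aa, atol=1e-8))  # both reach the same fixed point
```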

Findings

The new scheme is applicable to arbitrary polyhedral grids and has second-order accuracy. The results of numerical experiments also confirm the positivity-preserving property of the discretization scheme.

Originality/value

1. This article presents a new positivity-preserving finite volume scheme for the 3D convection–diffusion equation.
2. A new discretization scheme for the convection flux is constructed.
3. A new second-order interpolation algorithm is given to eliminate the auxiliary unknowns in the flux expressions.
4. An improved Anderson acceleration method is applied to accelerate the convergence of the Picard iterations.
5. The scheme solves the convection–diffusion equation on distorted meshes with second-order accuracy.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 15 April 2024

Seyed Abbas Rajaei, Afshin Mottaghi, Hussein Elhaei Sahar and Behnaz Bahadori

This study aims to investigate the spatial distribution of housing prices and identify the factors (independent variables) affecting the price of residential units (dependent…

Abstract

Purpose

This study aims to investigate the spatial distribution of housing prices and identify the factors (independent variables) affecting the price of residential units (dependent variable).

Design/methodology/approach

The method of the present study is descriptive-analytical with an applied purpose. The statistical population is the prices of residential units in Tehran in 2021. The average price per square meter of residential units in each city neighborhood was entered into a geographical information system. Two techniques, ordinary least squares (OLS) regression and geographically weighted regression (GWR), were used to analyze and model housing prices. The results of the two models were then compared using the housing price interpolation map predicted by each model and the actual housing price interpolation map.
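
A minimal sketch contrasting a global OLS fit with the locally kernel-weighted fits that GWR performs at each location, with synthetic data and a Gaussian bandwidth as illustrative assumptions:

```python
import numpy as np

def ols(X, y):
    """Global OLS coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def gwr_at(point, coords, X, y, bandwidth):
    """Local coefficients at one location via kernel-weighted least squares."""
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian spatial kernel
    sw = np.sqrt(w)
    return np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))                  # neighborhood centroids
X = np.column_stack([np.ones(200), rng.normal(size=200)])   # intercept + one predictor
beta_true = np.column_stack([coords[:, 0], 2 + 0.3 * coords[:, 1]])  # spatially varying
y = (X * beta_true).sum(axis=1) + rng.normal(scale=0.1, size=200)

print("global OLS:", ols(X, y))
print("local GWR :", gwr_at(np.array([5.0, 5.0]), coords, X, y, bandwidth=1.5))
```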

Findings

Based on the results, the OLS regression model modeled housing prices in the study area poorly. The results of the GWR model show that access to sports fields, distance from the nearest gas station and distance from the nearest water station have a direct and significant effect, while distance from a fault has no significant impact on housing prices at the city level. In identifying the variables affecting housing prices, the results confirm that the GWR technique explains housing prices more accurately than OLS regression. The results of this study indicate that housing prices in Tehran are affected by the level of access to urban services and facilities.

Originality/value

Identifying the factors affecting housing prices helps create sustainable housing in Tehran. Sustainable housing consumes less energy during both construction and use, ultimately providing housing at an acceptable price for all income deciles. The more the principles of sustainable housing are considered in construction, the more sustainable the housing provided and the closer the step toward sustainable development. Sustainable housing is therefore an important planning factor for local authorities and developers. As a result, it is necessary to institutionalize an integrated vision based on the concepts of sustainable development in the field of housing in the Tehran metropolis.

Details

International Journal of Housing Markets and Analysis, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1753-8270

Keywords

Article
Publication date: 10 May 2024

Hongshuai Guo, Shuyou Zhang, Nan Zhang, Xiaojian Liu and Guodong Yi

The step effect and support structures generated by the manufacturing process of fused deposition modeling parts increase consumables cost and decrease printing quality…

Abstract

Purpose

The step effect and support structures generated by the manufacturing process of fused deposition modeling parts increase consumables cost and decrease printing quality. Multiorientation printing helps improve the surface quality of parts and reduce support, but path interference exists between the current printing layer and previously printed layers. The purpose of this study is to design printing paths between different submodels that avoid interference when the build orientation changes.

Design/methodology/approach

Considering the support constraint, a build orientation sequence is designed for the submodels obtained by topological decomposition of the model, and the minimum printing angle between printing layers is analyzed. An initial path is planned through the oriented bounding box, and the slice interference relationship is detected according to projection topology mapping. Based on the relationship matrix of the multiorientation slices, a feasible path is calculated on a directed graph (DG). The final printing path is determined under the support constraint and checked against the minimum printing angle. A simulation model of the robotic arm is established to verify the accessibility of the printing path under the support and slice constraints.
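
A minimal sketch of extracting a feasible printing order from a slice interference matrix via a directed graph and Kahn's topological sort (the matrix is an illustrative assumption, not the paper's data):

```python
from collections import deque

# interfere[i][j] == 1 means submodel i must be printed before submodel j
# (printing j first would collide with i's layers).
interfere = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]

def feasible_order(adj):
    """Kahn's topological sort; returns None if the DG has a cycle,
    i.e. no interference-free printing order exists."""
    n = len(adj)
    indeg = [sum(adj[i][j] for i in range(n)) for j in range(n)]
    queue = deque(j for j in range(n) if indeg[j] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in range(n):
            if adj[u][v]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
    return order if len(order) == n else None

print(feasible_order(interfere))  # [0, 1, 2, 3]
```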

Findings

The proposed method can reduce support structures, decrease volume error and effectively solve the interference problem of the printing path for multiorientation slicing.

Originality/value

The method based on projection topology mapping greatly improves the efficiency of interference detection. A feasible path calculated through DGs ensures the effectiveness of the printing path under the support and slice constraints.

Details

Robotic Intelligence and Automation, vol. 44 no. 3
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 31 July 2024

Yongqing Ma, Yifeng Zheng, Wenjie Zhang, Baoya Wei, Ziqiong Lin, Weiqiang Liu and Zhehan Li

With the development of intelligent technology, deep learning has made significant progress and has been widely used in various fields. Deep learning is data-driven, and its…


Abstract

Purpose

With the development of intelligent technology, deep learning has made significant progress and has been widely used in various fields. Deep learning is data-driven, and its training process requires a large amount of data to improve model performance. However, labeled data is expensive and not readily available.

Design/methodology/approach

To address the above problem, researchers have integrated semi-supervised learning and deep learning, using a small amount of labeled data and a large amount of unlabeled data to train models. In this paper, generative adversarial networks (GANs) are analyzed as an entry point. First, we discuss current research on GANs in image super-resolution applications, covering supervised, unsupervised and semi-supervised learning approaches. Second, based on semi-supervised learning, different optimization methods are introduced, using image classification as an example. Finally, experimental comparisons and analyses of existing semi-supervised GAN-based optimization methods are performed.
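
As background, a minimal sketch of the K+1-class discriminator commonly used in semi-supervised GANs (following Salimans et al., 2016), assuming PyTorch and illustrative tensor shapes, not any specific surveyed method:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 10  # number of real classes

disc = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, K + 1),  # logits: K real classes + 1 "fake" class
)

labeled, labels = torch.randn(16, 1, 28, 28), torch.randint(0, K, (16,))
unlabeled = torch.randn(16, 1, 28, 28)
fake = torch.randn(16, 1, 28, 28)  # would come from the generator

# Supervised part: labeled images must land in their true class.
loss_supervised = F.cross_entropy(disc(labeled), labels)

# Unsupervised part: unlabeled images should fall in any real class
# (not index K); generated images should fall in the fake class (index K).
p_real_unlab = 1 - F.softmax(disc(unlabeled), dim=1)[:, K]
p_fake_gen = F.softmax(disc(fake), dim=1)[:, K]
loss_unsupervised = -(torch.log(p_real_unlab + 1e-8).mean()
                      + torch.log(p_fake_gen + 1e-8).mean())

loss = loss_supervised + loss_unsupervised
```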

Findings

Following the analysis of the selected studies, we summarize the open problems identified during the research and propose future research directions.

Originality/value

This paper reviews and analyzes research on generative adversarial networks for image super-resolution and classification across various learning approaches. A comparative analysis of experimental results for current semi-supervised GAN optimizations is performed to provide a reference for further research.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Keywords
