Search results

1 – 10 of 323
Article
Publication date: 27 February 2023

Wenfeng Zhang, Ming K. Lim, Mei Yang, Xingzhi Li and Du Ni

Abstract

Purpose

As the supply chain is a highly integrated infrastructure in modern business, risks in the supply chain are becoming highly contagious to the target company. This motivates researchers to continuously add new features to the datasets used for credit risk prediction (CRP). However, adding new features can easily lead to missing data.

Design/methodology/approach

Based on the gaps summarized from the literature on CRP, this study first introduces the approaches to building the datasets and framing the algorithmic models. It then tests the interpolation performance of the algorithmic model on three artificial datasets with different missing rates and compares its predictability before and after interpolation on a real dataset with missing data in irregular time series.

Findings

The time-decayed long short-term memory (TD-LSTM) model proposed in this study can handle missing data in irregular time series by capturing more, and better, time-series information and interpolating the missing data efficiently. Moreover, a deep neural network model can be used for CRP on datasets with missing data in irregular time series after interpolation by the TD-LSTM.
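The abstract does not give the TD-LSTM equations. A minimal sketch of the general idea, discounting the cell memory by a decreasing function of the elapsed time before the usual gate updates, is shown below; the decay function g(Δt) = 1/log(e + Δt), the stacked gate layout and all names are illustrative assumptions, not the authors' published formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def td_lstm_step(x, h, c, dt, W, U, b):
    """One step of a time-decayed LSTM cell (illustrative).

    Before the standard gate computations, the previous cell memory is
    down-weighted by a monotonically decreasing function of the elapsed
    time dt, so older information contributes less after long gaps in an
    irregular time series.
    """
    decay = 1.0 / np.log(np.e + dt)       # decay factor in (0, 1]
    c = c * decay                         # discount stale memory
    z = W @ x + U @ h + b                 # stacked pre-activations (4H,)
    i, f, o, g = np.split(z, 4)           # input/forget/output/candidate
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new
```

With hidden size H and input size D, the weights have shapes W (4H, D), U (4H, H) and b (4H,); at dt = 0 the decay factor is exactly 1, recovering a standard LSTM step.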

Originality/value

This study fully validates the TD-LSTM interpolation effects and demonstrates that the predictability of the dataset after interpolation is improved. Accurate and timely CRP can undoubtedly assist a target company in avoiding losses. Identifying credit risks and taking preventive measures ahead of time, especially in the case of public emergencies, can help the company minimize losses.

Details

Industrial Management & Data Systems, vol. 123 no. 5
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 6 November 2023

Thiago Galdino Balista, Carlos Friedrich Loeffler, Luciano Lara and Webe João Mansur

Abstract

Purpose

This work compares the performance of three boundary element techniques for solving Helmholtz problems: dual reciprocity, multiple reciprocity and direct interpolation. All three transform domain integrals into boundary integrals, though by different principles.

Design/methodology/approach

The comparisons performed here include eigenvalue solution and response by frequency scanning, analyzing many features that are not comprehensively discussed in the literature: the type of boundary conditions, the suitable number of degrees of freedom, the modal content, the number of primitives in the multiple reciprocity method (MRM) and the requirement for internal interpolation points in techniques that use radial basis functions, such as dual reciprocity and direct interpolation.
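As a minimal illustration of the radial-basis-function ingredient shared by dual reciprocity and direct interpolation, the sketch below interpolates a scattered domain function with the classical 1 + r basis over boundary and internal points; the function name and choice of basis are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def rbf_interpolate(points, values, query):
    """Fit b(x) ~ sum_j alpha_j * phi(|x - x_j|) with phi(r) = 1 + r
    (the classical basis used in dual reciprocity BEM) and evaluate
    the fitted function at the query points."""
    # pairwise distances between interpolation points
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    F = 1.0 + r                           # interpolation matrix
    alpha = np.linalg.solve(F, values)    # interpolation coefficients
    rq = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return (1.0 + rq) @ alpha
```

By construction the interpolant reproduces the data exactly at the interpolation points; adding internal points, as DRBEM and DIBEM require, simply enlarges `points`.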

Findings

Among other aspects, this work concludes that the solution of the eigenvalue and response problems confirmed reasonable accuracy of the dual reciprocity boundary element method (DRBEM) only for the first natural frequencies. Concerning the direct interpolation boundary element method (DIBEM), its interpolation characteristic makes it more amenable to solving more elaborate problems. Despite requiring a greater number of internal interpolation points, the DIBEM presented higher-quality results for both the eigenvalue and response problems. The MRM results were satisfactory in accuracy only for the low frequency range; the neglected higher-order primitives impact the accuracy of the dynamic response as a whole.

Originality/value

There are safe alternatives for solving stationary dynamic engineering problems with the boundary element method (BEM), but suitable comparisons between these different techniques are lacking. This paper presents the particularities of, and detailed comparisons between, three important BEM techniques in terms of accuracy for response and frequency evaluation, which are not found in the specialized literature.

Details

Engineering Computations, vol. 40 no. 9/10
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 6 November 2023

Daniel E.S. Rodrigues, Jorge Belinha and Renato Natal Jorge

Abstract

Purpose

Fused Filament Fabrication (FFF) is an extrusion-based manufacturing process using fused thermoplastics. Despite its low cost, FFF is not extensively used in high-value industrial sectors, mainly due to part anisotropy (related to the deposition strategy) and residual stresses (caused by successive heating cycles). This study therefore investigates process improvement and the optimization of the printed parts.

Design/methodology/approach

In this work, a meshless technique, the Radial Point Interpolation Method (RPIM), is used to numerically simulate the viscoplastic extrusion process, the initial phase of FFF. Unlike the FEM, meshless methods impose no pre-established relationship between the nodes, so the nodal discretization does not suffer mesh distortion and can easily be modified by adding or removing nodes. The accuracy of the obtained results highlights the importance of using meshless techniques in this field.
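The RPIM formulation itself is not detailed in the abstract. A common textbook construction builds shape functions from a radial basis function augmented with a linear polynomial basis; the sketch below follows that construction under assumed choices (multiquadric RBF with shape parameter c, hypothetical names), not the authors' in-house code.

```python
import numpy as np

def rpim_shape_functions(nodes, x, c=1.0):
    """RPIM shape functions at point x in 2D, using a multiquadric RBF
    augmented with a linear polynomial basis.  Unlike moving least
    squares, the result satisfies the Kronecker delta property, so
    essential boundary conditions can be imposed directly."""
    n = len(nodes)

    def rbf(a, b):
        return np.sqrt(np.sum((a - b) ** 2, axis=-1) + c ** 2)

    R = rbf(nodes[:, None, :], nodes[None, :, :])   # n x n moment matrix
    P = np.hstack([np.ones((n, 1)), nodes])         # linear basis [1, x, y]
    m = P.shape[1]
    G = np.zeros((n + m, n + m))                    # augmented system
    G[:n, :n] = R
    G[:n, n:] = P
    G[n:, :n] = P.T
    rhs = np.concatenate([rbf(x[None, :], nodes), np.concatenate([[1.0], x])])
    coef = np.linalg.solve(G, rhs)                  # G is symmetric
    return coef[:n]                                 # shape function values
```

Because the polynomial augmentation includes the constant term, the shape functions form a partition of unity, and the interpolant passes exactly through the nodal values.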

Findings

Meshless methods show particular relevance in this topic since the nodes can be distributed to match the layer-by-layer growing condition of the printing process.

Originality/value

Using the flow formulation combined with the heat transfer formulation presented here for the first time within an in-house RPIM code, an algorithm is proposed, implemented and validated for benchmark examples.

Article
Publication date: 5 May 2023

Chung-Ping Chang, Song-Fu Hong and Tzu-Guang Chen

Abstract

Purpose

In this investigation, a linear encoder system based on an ultrasonic transducer is proposed. Ultrasonic transducers are usually designed for distance measurements, such as the time-of-flight method and sonar systems; these applications are discrete-length measurement technologies. The purpose of this study is to develop a continuous displacement measurement system using ultrasonic transducers.

Design/methodology/approach

A modified signal processing scheme based on heterodyning is implemented in this system. It comprises an automatic gain control module, a phase-shifting module, a phase detection module, an interpolation module and, in particular, a frequency multiplication module, which enhances the resolution and reduces the interpolation error simultaneously.
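The exact module designs are not given in the abstract. A minimal sketch of heterodyne-style phase detection and phase-to-displacement interpolation, assuming simple quadrature mixing with a mean as the low-pass filter, might look like this (all names and the carrier parameters are illustrative assumptions):

```python
import numpy as np

def quadrature_phase(signal, ref_cos, ref_sin):
    """Recover the phase of a received carrier by mixing it with
    in-phase and quadrature references and low-pass filtering (here a
    simple mean over whole periods).  The quadrature sign is chosen so
    that an input cos(w*t + phi) yields +phi."""
    i_comp = np.mean(signal * ref_cos)
    q_comp = -np.mean(signal * ref_sin)
    return np.arctan2(q_comp, i_comp)

def phase_to_displacement(phase, wavelength, multiplier=4):
    """One full phase cycle corresponds to one carrier wavelength of
    path change; an M-fold frequency multiplication subdivides each
    cycle M times, refining the interpolation step."""
    counts = phase / (2 * np.pi) * multiplier       # interpolated counts
    return counts * (wavelength / multiplier)
```

For a 40 kHz carrier in air (wavelength about 8.5 mm), a phase change of 2π maps back to one wavelength of displacement regardless of the multiplier.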

Findings

The proposed system can generate the encoding signals and is compatible with most motion control systems. In the experiments, the maximum measurement error and standard deviation are about −0.027 mm and 0.048 mm, respectively. This shows that the proposed encoder system has potential for displacement measurement tasks.

Originality/value

This study reveals an ultrasonic linear encoder that is capable of generating an incremental encoding signal, accompanied by a corresponding signal processing methodology. In contrast to the conventional heterodyne signal processing approach, the proposed multiplication method effectively reduces the interpolation error that arises because of multiple reflections.

Details

Sensor Review, vol. 43 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 13 October 2023

Kai Wang, Jiaying Liu, Shuai Yang, Jing Guo and Yongzhen Ke

Abstract

Purpose

This paper aims to automatically obtain implant parameters from CBCT images to improve the outcome of implant planning.

Design/methodology/approach

This paper proposes automatic simulated dental implant positioning on CBCT images, which can significantly improve the efficiency of implant planning. The authors introduce a fusion point calculation method for the missing tooth's long axis and root axis, based on the dental arch line, to obtain the optimal fusion position. In addition, the authors propose a semi-interactive visualization method for the implant parameters automatically simulated by their method. If the plan does not meet the doctor's requirements, it can be fine-tuned to achieve the optimal effect.

Findings

A series of experimental results shows that the method proposed in this paper greatly improves the feasibility and accuracy of the implant planning scheme, and that the visualization of implant parameters improves planning efficiency and the friendliness of the system.

Originality/value

The proposed method can be applied to dental implant planning software to improve the communication efficiency between doctors, patients and technicians.

Details

Engineering Computations, vol. 40 no. 9/10
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 31 July 2023

Shekhar Srivastava, Rajiv Kumar Garg, Anish Sachdeva, Vishal S. Sharma, Sehijpal Singh and Munish Kumar Gupta

Abstract

Purpose

The gas metal arc-based directed energy deposition (GMA-DED) process experiences residual stress (RS), developed due to heat accumulation during successive layer deposition, as a significant challenge. To address that, monitoring the transient temperature distribution over time is a critical input. Finite element analysis (FEA) is considered a decisive engineering tool for quantifying temperature and RS in all manufacturing processes. However, computational time and prediction accuracy have always been a matter of concern for FEA-based prediction of responses in the GMA-DED process. Therefore, this study aims to investigate the effect of finite element mesh variations on the developed RS in the GMA-DED process.

Design/methodology/approach

Variation in the element shape functions, i.e. linear- and quadratic-interpolation elements, has been used to model a single-track, 10-layered thin-walled component in ANSYS Parametric Design Language. Two cases are proposed in this study: Case 1 is meshed with linear-interpolation elements, and Case 2 with a combination of linear- and quadratic-interpolation elements. Furthermore, the modelled responses are validated against experimental results measured through a data acquisition system for temperature and RS.
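As background to the two cases, the difference between linear- and quadratic-interpolation elements can be illustrated with 1D shape functions on the reference interval [−1, 1], a simplified stand-in for the ANSYS element technologies actually used:

```python
import numpy as np

# A 2-node linear element interpolates a field with straight segments,
# while a 3-node quadratic element can capture curvature (e.g. a steep
# thermal gradient) within a single element.
def linear_shape(xi):
    return np.array([(1 - xi) / 2,          # node at xi = -1
                     (1 + xi) / 2])         # node at xi = +1

def quadratic_shape(xi):
    return np.array([xi * (xi - 1) / 2,     # node at xi = -1
                     (1 - xi) * (1 + xi),   # node at xi =  0
                     xi * (xi + 1) / 2])    # node at xi = +1
```

Both sets sum to 1 at every point (partition of unity), but only the quadratic set reproduces a parabolic temperature profile exactly, which is the trade-off between cost and accuracy the study quantifies.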

Findings

Good agreement between predicted and experimental temperature and RS profiles has been observed. Considering similar parameters, Case 1 produced an average error of 4.13% in temperature prediction, whereas Case 2 produced 23.45%. Besides, comparing the longitudinal stress in the transverse direction, Cases 1 and 2 produced errors of 8.282% and 12.796%, respectively.

Originality/value

To avoid the costly and time-consuming experimental approach, experts have suggested the use of numerical methods in the design optimization of engineering problems. The FEA approach, however, can suffer high computational cost and low accuracy depending on the element technology selected. This research can serve as a basis for choosing the element technology that better predicts responses in the thermo-mechanical modelling of the GMA-DED process.

Details

Rapid Prototyping Journal, vol. 29 no. 10
Type: Research Article
ISSN: 1355-2546

Open Access
Article
Publication date: 22 May 2023

Edmund Baffoe-Twum, Eric Asa and Bright Awuku

Abstract

Background: Geostatistics focuses on spatial or spatiotemporal datasets. It was initially developed to generate probability distribution predictions of ore grade in the mining industry; however, it has been successfully applied in diverse scientific disciplines. These techniques include univariate and multivariate estimation as well as simulation. The kriging methods (simple, ordinary and universal kriging) are not multivariate models in the usual statistical sense. Notwithstanding, they utilize random function models that include unlimited random variables while modeling one attribute. The coKriging technique is a multivariate estimation method that simultaneously models two or more attributes defined over the same domain as a coregionalization.

Objective: This study investigates the impact of population on traffic volumes as a secondary variable. The additional variable determines the strength or accuracy gained when data integration is adopted. In addition, this helps improve the estimation of annual average daily traffic (AADT).

Methods, procedures, process: The investigation adopts the coKriging (CK) technique with AADT data from 2009 to 2016 from Montana, Minnesota and Washington as the primary attribute and population as a controlling factor (secondary variable). CK is implemented for this study after reviewing the literature and comparing it with other geostatistical methods.
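The study's CK models are not reproduced here, but the univariate ordinary kriging building block that coKriging extends can be sketched as follows; the exponential covariance model and its parameters are assumptions for illustration only.

```python
import numpy as np

def ordinary_kriging(coords, values, query, sill=1.0, rng_par=50.0):
    """Ordinary kriging with an exponential covariance model: the best
    linear unbiased estimate at `query` given scattered observations.
    coKriging extends this system with a secondary attribute (here,
    population) via cross-covariances."""
    def cov(h):
        return sill * np.exp(-h / rng_par)

    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = cov(d)
    K[:n, n] = 1.0          # Lagrange row/column enforcing
    K[n, :n] = 1.0          # that the weights sum to one
    k = np.concatenate([cov(np.linalg.norm(coords - query, axis=1)), [1.0]])
    w = np.linalg.solve(K, k)
    return w[:n] @ values
```

With no nugget effect the estimator is an exact interpolator (it reproduces each observation at its own location), and the unit-sum constraint makes it unbiased for an unknown constant mean.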

Results, observations, and conclusions: The investigation employed two variables. The data integration in CK yields more reliable models because its strength is drawn from multiple variables. The cross-validation results of the model types explored with the CK technique evaluate the interpolation technique's performance and help select optimal models for each state. The Montana and Minnesota models accurately represent those states' traffic and population density. The Washington model had a few exceptions; however, the secondary attribute helped yield an accurate interpretation. Consequently, the impact of tourism, shopping, recreation centers and possible transiting patterns throughout the state is worth exploring.

Details

Emerald Open Research, vol. 1 no. 5
Type: Research Article
ISSN: 2631-3952

Article
Publication date: 12 January 2024

Imtiyaz Ahmad Bhat, Lakshmi Narayan Mishra, Vishnu Narayan Mishra, Cemil Tunç and Osman Tunç

Abstract

Purpose

This study aims to discuss the numerical solution of weakly singular Volterra and Fredholm integral equations, which model problems such as heat conduction in engineering and electrostatic potential theory, using the modified Lagrange polynomial interpolation technique combined with the biconjugate gradient stabilized method (BiCGSTAB). The framework for the existence of unique solutions of the integral equations is provided in the context of the Banach contraction principle and the Bielecki norm.

Design/methodology/approach

The authors apply the modified Lagrange polynomial method to approximate the numerical solutions of weakly singular Volterra and Fredholm integral equations of the second kind.
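The modified Lagrange basis itself is not reproduced in the abstract. The overall pipeline (interpolate the unknown, assemble an algebraic system, solve it with BiCGSTAB) can be sketched for a smooth-kernel Fredholm equation of the second kind, a simplified stand-in for the weakly singular kernels the paper actually treats; the kernel, grid and manufactured right-hand side below are assumptions for illustration.

```python
import numpy as np
from scipy.sparse.linalg import bicgstab

# Collocate u(x) - ∫₀¹ K(x,t) u(t) dt = f(x) on a uniform grid with
# trapezoidal quadrature (Nyström method), then solve the resulting
# algebraic system with BiCGSTAB.  The smooth kernel K(x,t) = x*t and
# f(x) = 2x/3 are manufactured so the exact solution is u(x) = x.
n = 41
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)            # trapezoid weights
K = np.outer(x, x)                      # kernel samples K(x_i, t_j)
A = np.eye(n) - K * w                   # discretized integral operator
f = 2.0 * x / 3.0                       # manufactured right-hand side
u, info = bicgstab(A, f, atol=1e-12)    # info == 0 on convergence
```

For a weakly singular kernel this plain quadrature degrades, which is precisely where the paper's modified Lagrange interpolation comes in.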

Findings

Interpolating the unknown function with the aforementioned method generates an algebraic system of equations, which is solved by an appropriate classical technique. Furthermore, some theorems concerning the convergence of the method and error estimation are proved. Numerical examples are provided that attest to the applicability, effectiveness and reliability of the method. The current technique works better for Volterra integral equations of weakly singular type than for their Fredholm counterparts. Illustrative examples and comparisons show the approach's validity and practicality, demonstrating that the present method performs well in contrast to the referenced method. The computations were performed in MATLAB.

Research limitations/implications

The convergence of these methods depends on the smoothness of the solution, so it is challenging to find and computationally approximate solutions in applications modelled by integral equations with non-smooth kernels. Traditional techniques, such as projection methods, do not work well in these cases, since the resulting linear system is ill-conditioned and hard to solve. Proving convergence and estimating error can also be difficult, and such methods are frequently expensive to implement.

Practical implications

There is a great need for fast, user-friendly numerical techniques for these types of equations. Polynomials are among the most frequently used mathematical tools because they are easy to express, quick to compute on modern computers and simple to define. As a result, they have contributed substantially for many years to approximation theory and numerical analysis.

Social implications

This work presents a useful method for handling weakly singular integral equations without involving any process of change of variables to eliminate the singularity of the solution.

Originality/value

To the best of the authors' knowledge, this is the first successful application of the approach to weakly singular Volterra and Fredholm integral equations. Importantly, the approach acknowledges and preserves the possible singularity of the solution, an aspect yet to be explored by researchers in the field.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 3
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 4 July 2023

Jiayu Qin, Nengxiong Xu and Gang Mei

Abstract

Purpose

In this paper, the smoothed point interpolation method (SPIM) is used to model slope deformation. However, the computational efficiency of SPIM is unsatisfactory when modeling large-scale nonlinear deformation problems of geological bodies.

Design/methodology/approach

To address this, an efficient face-based SPIM is designed, incorporating two accelerating strategies: one to speed up the solution of large-scale slope deformation and one to improve the convergence of the nonlinear behavior occurring in the slope deformation.

Findings

A simple slope model with different mesh sizes is used to verify the performance of the efficient face-based SPIM. The first accelerating strategy greatly enhances the computational efficiency of solving the large-scale slope deformation. The second accelerating strategy effectively improves the convergence of nonlinear behavior that occurred in the slope deformation.
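The face-based smoothing domains are not specified in the abstract; the gradient smoothing operation at the core of SPIM-type methods can be sketched in 2D via the divergence theorem, with the polygon and nodal field values below as hypothetical inputs.

```python
import numpy as np

# Instead of differentiating shape functions, SPIM-type methods obtain
# the gradient over a smoothing domain from boundary values:
#   grad(u) ≈ (1/A) * sum over edges of  u_mid * n * L,
# where n is the outward unit normal and L the edge length.
def smoothed_gradient(verts, u_verts):
    """Smoothed gradient of a scalar field over a CCW polygon, given
    nodal values at the polygon vertices (midpoint rule per edge)."""
    n = len(verts)
    area = 0.0
    grad = np.zeros(2)
    for i in range(n):
        p, q = verts[i], verts[(i + 1) % n]
        area += 0.5 * (p[0] * q[1] - q[0] * p[1])   # shoelace formula
        edge = q - p
        normal = np.array([edge[1], -edge[0]])       # outward n * L (CCW)
        grad += 0.5 * (u_verts[i] + u_verts[(i + 1) % n]) * normal
    return grad / area
```

For a linear field the edge-midpoint rule is exact, so the smoothed gradient reproduces the true constant gradient, which is the consistency property these methods rely on.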

Originality/value

The designed efficient face-based SPIM can enhance the computational efficiency when analyzing large-scale nonlinear slope deformation problems, which can help to predict and prevent potential geological hazards.

Details

Engineering Computations, vol. 40 no. 5
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 31 October 2022

Seyedeh Mehrangar Hosseini, Behnaz Bahadori and Shahram Charkhan

Abstract

Purpose

The purpose of this study is to identify the situation of spatial inequality in the residential system of Tehran city in terms of housing prices in the year 2021 and to examine its changes over time (1991–2021).

Design/methodology/approach

In terms of purpose, this study is applied research using a descriptive-analytical method. The statistical population of this research is the residential units of Tehran city in 2021. The average price per square meter of a residential unit at the neighborhood level was entered into a geographical information system (GIS) for 2021. Moran's spatial autocorrelation method, cluster analysis maps (hot and cold spots) and Kriging interpolation were used for the spatial analysis of points. Then, the change in spatial inequality in the residential system of Tehran was measured based on the price per square meter of a residential unit over 30 years across the 22 districts of Tehran, using distance-based statistical clustering with standard deviation.
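As an illustration of the first spatial statistic mentioned, global Moran's I can be computed from attribute values and a spatial weight matrix as below; the toy weight construction and data are assumptions for illustration, not the study's GIS workflow.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I for spatial autocorrelation of an attribute
    (e.g. housing price per square meter), given a spatial weight
    matrix W.  Values near +1 indicate clustering of similar values,
    near -1 dispersion, and near 0 spatial randomness."""
    z = values - values.mean()          # deviations from the mean
    n = len(values)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)
```

A clustered pattern (similar prices adjacent to each other) yields a strongly positive I, which is how a score such as 0.87 supports the clustered-distribution conclusion.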

Findings

The result of the spatial autocorrelation analysis, with a score of 0.873872 and a p-value of 0.000000, indicates a clustered distribution of housing prices throughout the city. The hot spot results show that the highest concentration of hot spots (the highest prices) is in the northern part of the city, and the highest concentration of cold spots (the lowest prices) is in the southern part of Tehran. Estimating the quantitative values of data-free points with Kriging interpolation indicates that 9.95% of Tehran's area has a price below US$800 per square meter, 17.68% US$800 to US$1,200, 25.40% US$1,200 to US$1,600, 17.61% US$1,600 to US$2,000, 9.54% US$2,000 to US$2,200, 6.69% US$2,200 to US$2,600, 5.38% US$2,600 to US$2,800, 4.59% US$2,800 to US$3,200 and, finally, 3.16% above US$3,200. The highest price concentration (above US$3,200) is in five neighborhoods (Zafaranieh, Mahmoudieh, Tajrish, Bagh-Ferdows and Hesar Bou-Ali). The findings on changes in housing prices over the period 1991–2021 indicate that the southern part of Tehran has grown slightly relative to the average, while the western part, which includes districts 21 and 22, has grown much more than the average price.

Originality/value

There is massive inequality in housing prices across the different areas and neighborhoods of Tehran in 2021, and over the period under study, spatial inequality in the residential system of Tehran intensified. The considerable increase in housing prices in Tehran's housing market has turned housing into a commodity, intensifying the inequality between owners and non-owners. This increase in housing price inequality has driven growth in informal living among the population of the southern part, which experiences living conditions that contrast with urban plans and policies.

Details

International Journal of Housing Markets and Analysis, vol. 17 no. 2
Type: Research Article
ISSN: 1753-8270
