Search results

1 – 10 of over 2000
Article
Publication date: 4 March 2024

Yongjiang Xue, Wei Wang and Qingzeng Song

Abstract

Purpose

The primary objective of this study is to tackle the enduring challenge of preserving feature integrity during the manipulation of geometric data in computer graphics. Our work aims to introduce and validate a variational sparse diffusion model that enhances the capability to maintain the definition of sharp features within meshes throughout complex processing tasks such as segmentation and repair.

Design/methodology/approach

We developed a variational sparse diffusion model that integrates a high-order L1 regularization framework with Dirichlet boundary constraints, specifically designed to preserve edge definition. This model employs an innovative vertex updating strategy that optimizes the quality of mesh repairs. We leverage the augmented Lagrangian method to address the computational challenges inherent in this approach, enabling effective management of the trade-off between diffusion strength and feature preservation. Our methodology involves a detailed analysis of segmentation and repair processes, focusing on maintaining the acuity of features on triangulated surfaces.
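The augmented Lagrangian treatment of a high-order L1 term typically reduces, at each iteration, to a soft-thresholding (shrinkage) update. The following is a minimal sketch of that scalar building block only, not the authors' mesh solver; the threshold value is illustrative:

```python
# Minimal sketch: the scalar soft-thresholding (shrinkage) operator that
# typically appears in augmented-Lagrangian / ADMM subproblems for L1 terms.
# Illustrative building block only, not the authors' mesh-processing solver.

def soft_threshold(x: float, tau: float) -> float:
    """Return argmin_z of tau*|z| + 0.5*(z - x)**2."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0  # values below the threshold are shrunk exactly to zero (sparsity)

# Small values vanish, larger ones shrink toward zero:
print([soft_threshold(v, 1.0) for v in (-3.0, -0.5, 0.2, 2.5)])
# → [-2.0, 0.0, 0.0, 1.5]
```

The exact-zero region is what induces sparsity, which is how L1 penalties can keep sharp edges from being diffused away.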

Findings

Our findings indicate that the proposed variational sparse diffusion model significantly outperforms traditional smooth diffusion methods in preserving sharp features during mesh processing. The model ensures the delineation of clear boundaries in mesh segmentation and achieves high-fidelity restoration of deteriorated meshes in repair tasks. The innovative vertex updating strategy within the model contributes to enhanced mesh quality post-repair. Empirical evaluations demonstrate that our approach maintains the integrity of original, sharp features more effectively, especially in complex geometries with intricate detail.

Originality/value

The originality of this research lies in the novel application of a high-order L1 regularization framework to the field of mesh processing, a method not conventionally applied in this context. The value of our work is in providing a robust solution to the problem of feature degradation during the mesh manipulation process. Our model’s unique vertex updating strategy and the use of the augmented Lagrangian method for optimization are distinctive contributions that enhance the state-of-the-art in geometry processing. The empirical success of our model in preserving features during mesh segmentation and repair presents an advancement in computer graphics, offering practical benefits to both academic research and industry applications.

Details

Engineering Computations, vol. 41 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 4 March 2024

Hillal M. Elshehabey, Andaç Batur Çolak and Abdelraheem Aly

Abstract

Purpose

The purpose of this study is to adapt the incompressible smoothed particle hydrodynamics (ISPH) method with artificial intelligence to manage the physical problem of double diffusion inside a porous L-shaped cavity including two fins.

Design/methodology/approach

The ISPH method solves the nondimensional governing equations of a physical model. The ISPH simulations are performed at different values of the Frank–Kamenetskii number, Darcy number, coupled Soret/Dufour numbers, coupled Cattaneo–Christov heat/mass fluxes, thermal radiation parameter and nanoparticle parameter. An artificial neural network (ANN) is developed using a total of 243 data sets: 171 are used for training the model, 36 for validation and 36 for testing. The network model is trained using the Levenberg–Marquardt training algorithm.
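The 171/36/36 partition of the 243 data sets described above can be sketched in plain Python. The shuffle (and its seed) is an illustrative assumption; the paper's exact partitioning procedure is not specified:

```python
import random

# Sketch of a 171/36/36 train/validation/test partition of 243 samples.
# The shuffle and seed are illustrative assumptions, not the paper's procedure.
data = list(range(243))            # stand-ins for the 243 simulation data sets
random.Random(0).shuffle(data)

train_set = data[:171]
val_set = data[171:207]
test_set = data[207:]
print(len(train_set), len(val_set), len(test_set))   # → 171 36 36
```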

Findings

The resulting simulations show how thermal radiation lowers the temperature distribution and changes the contour of the heat capacity ratio. The temperature distribution is improved, and the velocity field is decreased by 36.77%, when the coupled Cattaneo–Christov heat/mass fluxes are increased from 0 to 0.8. An increase in the Soret–Dufour numbers enhances the temperature distribution, reduces the concentration distribution and corresponds to a decreasing velocity field. The Frank–Kamenetskii number is useful for enhancing the velocity field and temperature distribution. A reduction in the Darcy number causes a stronger porous resistance, which reduces the nanofluid velocity and improves the temperature and concentration distributions. An increase in nanoparticle concentration raises the viscosity of the fluid suspension, which reduces the suspension's velocity. With the help of the ANN, the obtained model accurately predicts the values of the Nusselt and Sherwood numbers.

Originality/value

A novel integration between the ISPH method and the ANN is adapted to handle the heat and mass transfer within a new L-shaped geometry with fins in the presence of several physical effects.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 4
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 8 February 2024

Juho Park, Junghwan Cho, Alex C. Gang, Hyun-Woo Lee and Paul M. Pedersen

Abstract

Purpose

This study aims to identify an automated machine learning algorithm with high accuracy that sport practitioners can use to identify the specific factors for predicting Major League Baseball (MLB) attendance. Furthermore, by predicting spectators for each league (American League and National League) and division in MLB, the authors will identify the specific factors that increase accuracy, discuss them and provide implications for marketing strategies for academics and practitioners in sport.

Design/methodology/approach

This study used six years of daily MLB game data (2014–2019). Predictors such as game performance, weather and the unemployment rate were collected, and the attendance rate was obtained as the outcome variable. Random Forest, Lasso regression and XGBoost models were used to build the prediction model, and the analysis was conducted using Python 3.7.

Findings

Fine-tuning the hyperparameters of the XGBoost model, which had the best performance in forecasting the attendance rate, yielded an RMSE of 0.14 and an R2 of 0.62. The most influential variable in the model was "Rank", with an importance of 0.247, followed in order by "Day of the week", "Home team" and "Day/Night game". The "Unemployment rate", as a macroeconomic factor, had an importance of 0.06, and the weather factors had a combined importance of 0.147.

Originality/value

This research highlights unemployment rate as a determinant affecting MLB game attendance rates. Beyond contextual elements such as climate, the findings of this study underscore the significance of economic factors, particularly unemployment rates, necessitating further investigation into these factors to gain a more comprehensive understanding of game attendance.

Details

International Journal of Sports Marketing and Sponsorship, vol. 25 no. 2
Type: Research Article
ISSN: 1464-6668

Article
Publication date: 15 September 2023

Chen Jiang, Ekene Paul Odibelu and Guo Zhou

Abstract

Purpose

This paper aims to investigate the performance of two novel numerical methods, the face-based smoothed finite element method (FS-FEM) and the edge-based smoothed finite element method (ES-FEM), which employ linear tetrahedral elements, for the purpose of strength assessment of a high-speed train hollow axle.

Design/methodology/approach

The calculation of stress for the wheelset, comprising an axle and two wheels, is facilitated through the application of the European axle strength design standard. This standard assists in the implementation of loading and boundary conditions and is exemplified by the typical CRH2 high-speed train wheelset. To evaluate the performance of these two methods, a hollow cylinder cantilever beam is first used as a benchmark to compare the present methods with other existing methods. Then, the strength analysis of a real wheelset model with a hollow axle is performed using different numerical methods.

Findings

The results of deflection and stress show that FS-FEM and ES-FEM offer higher accuracy and better convergence than FEM using linear tetrahedral elements. ES-FEM exhibits a superior performance to that of FS-FEM using linear tetrahedral elements, showing accuracy and convergence close to FEM using hexahedral elements.

Originality/value

This study applies the novel methods (FS-FEM and ES-FEM) to the static stress analysis of a railway wheelset. Based on the careful testing of FS-FEM and ES-FEM, both methods hold promise as more efficient tools for the strength analysis of complex railway structures.

Details

Engineering Computations, vol. 40 no. 9/10
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 29 March 2024

Bingbing Qi, Lijun Xu and Xiaogang Liu

Abstract

Purpose

The purpose of this paper is to exploit the multiple-Toeplitz matrices reconstruction method combined with quadratic spatial smoothing processing to improve the direction-of-arrival (DOA) estimation performance of coherent signals at low signal-to-noise ratios (SNRs).

Design/methodology/approach

An improved multiple-Toeplitz matrices reconstruction method is proposed via quadratic spatial smoothing processing. Our proposed method takes advantage of the available information contained in the auto-covariance matrices of individual Toeplitz matrices and the cross-covariance matrices of different Toeplitz matrices, which results in a higher noise suppression ability.
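The basic step underlying Toeplitz-reconstruction methods of this kind, building a Toeplitz matrix from a first column and first row (in practice taken from rows of the array covariance matrix), can be illustrated with a minimal pure-Python sketch; the numeric values here are arbitrary and this is not the authors' algorithm:

```python
# Sketch: constructing a Toeplitz matrix from its first column and first row,
# the elementary step in Toeplitz-matrix-reconstruction DOA methods.
# Pure-Python illustration with arbitrary values, not the paper's method.

def toeplitz(first_col, first_row):
    """Toeplitz matrix T with T[i][j] = first_col[i-j] for i>=j, else first_row[j-i]."""
    n = len(first_col)
    return [[first_col[i - j] if i >= j else first_row[j - i]
             for j in range(n)] for i in range(n)]

T = toeplitz([1, 2, 3], [1, 4, 5])
print(T)   # → [[1, 4, 5], [2, 1, 4], [3, 2, 1]]
```

Constant diagonals are what decorrelate coherent sources, which is why such reconstructions (and the spatial smoothing applied on top of them) help at low SNR.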

Findings

Theoretical analysis and simulation results show that, compared with the existing Toeplitz matrix processing methods, the proposed method improves the DOA estimation performance in cases with a low SNR. Especially for the cases with a low SNR and small snapshot number as well as with closely spaced sources, the proposed method can achieve much better performance on estimation accuracy and resolution probability.

Research limitations/implications

The study investigates the possibility of reusing pre-existing designs for the DOA estimation of coherent signals. The proposed technique achieves good estimation performance at low SNRs.

Practical implications

The paper includes implications for the DOA problem at low SNRs in communication systems.

Originality/value

The proposed method proves useful for DOA estimation at low SNRs.

Details

Engineering Computations, vol. 41 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 3 July 2023

Mohammed-Alamine El Houssaini, Abdellah Nabou, Abdelali Hadir, Souad El Houssaini and Jamal El Kafi

Abstract

Purpose

Ad hoc mobile networks are commonplace in every aspect of our everyday life. They have become essential in many industries, with uses in logistics, science and the military. However, because they operate mostly in open environments, they are exposed to a variety of threats. The purpose of this study is to introduce a novel method for detecting MAC layer misbehavior.

Design/methodology/approach

The proposed approach is based on exponential smoothing for throughput prediction to address this MAC layer misbehavior. The real and expected throughput are processed using an exponential smoothing algorithm to identify the attack, and an alarm is raised if these metrics exhibit a diverging trend.
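The detection idea described above can be sketched in a few lines of plain Python: smooth the observed throughput exponentially and raise an alarm when the real value drifts well away from the smoothed forecast. The smoothing factor, threshold and sample series are illustrative assumptions, not the authors' NS-2 configuration:

```python
# Sketch of throughput-based misbehavior detection via exponential smoothing.
# alpha, threshold and the sample traces are illustrative assumptions.

def ewma(values, alpha):
    """Simple exponential smoothing; returns the smoothed series."""
    smoothed = [values[0]]
    for v in values[1:]:
        smoothed.append(alpha * v + (1 - alpha) * smoothed[-1])
    return smoothed

def misbehavior_alarm(throughput, alpha=0.2, threshold=0.3):
    """Alarm when real throughput falls well below its smoothed forecast."""
    pred = ewma(throughput, alpha)
    # compare each observation against the forecast from the previous step
    return any(p - t > threshold * p for p, t in zip(pred[:-1], throughput[1:]))

normal = [1.0, 0.98, 1.02, 0.99, 1.01]
greedy = [1.0, 0.97, 1.01, 0.45, 0.40]   # sudden starvation by a greedy node
print(misbehavior_alarm(normal), misbehavior_alarm(greedy))  # → False True
```

A small alpha makes the forecast track the recent history closely, which matches the paper's observation that values near 0 give accurate predictions.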

Findings

The effect of IEEE 802.11 MAC layer misbehavior on throughput was examined using the NS-2 network simulator, along with the validation of our novel strategy. The authors found that a smoothing factor value close to 0 provides a very accurate throughput forecast that takes into consideration the recent history of the updated real values. Smoothing factor values close to 1, in turn, are used to identify MAC layer misbehavior.

Originality/value

To the best of the authors' knowledge, this scheme has not previously been proposed in the state of the art for detecting greedy behavior in mobile ad hoc networks.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 8 May 2024

Mengyao Fan, Xiaojing Ma, Lin Li, Xinpeng Xiao and Can Cheng

Abstract

Purpose

In this paper, the complex flow and evaporation process of a droplet impacting the liquid film in a horizontal falling film evaporator is numerically studied based on the smoothed particle hydrodynamics (SPH) method. The purpose of this paper is to elucidate the mechanism of falling film evaporation for treating the high-salinity mine water of the Xinjiang region of China.

Design/methodology/approach

To effectively characterize the phase transition problem, particle splitting and merging techniques are introduced, and a particle absorbing layer is proposed to mitigate the nonphysical aggregation caused by the continuous splitting of gas-phase particles. A multiresolution model and artificial viscosity are also adopted.

Findings

The SPH model is validated qualitatively against experimental results and then applied to the evaporation of a droplet impacting the liquid film. It is shown that a larger initial droplet velocity and a smaller initial temperature difference between the droplet and the liquid film improve the liquid film evaporation. The heat transfer effect of a single droplet is preferable to that of multiple droplets.

Originality/value

A multiphase SPH model for evaporation after droplet impact on a liquid film is developed and validated. The effects of different factors on liquid film evaporation, including the initial velocity and initial temperature of a single droplet and the presence of multiple droplets, are investigated.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 31 January 2024

Ali Fazli and Mohammad Hosein Kazemi

Abstract

Purpose

This paper aims to propose a new linear parameter varying (LPV) controller for the robot tracking control problem. By identifying the robot dynamics at different workspace points along a modeling trajectory, using a least-squares error algorithm, an LPV model of the robotic arm is extracted.

Design/methodology/approach

Parameter set mapping based on parameter component analysis results in a reduced polytopic LPV model that reduces the complexity of the implementation. An approximation of the required torque is computed based on the reduced LPV models. The state-feedback gain of each zone is computed by solving some linear matrix inequalities (LMIs) to sufficiently decrease the time derivative of a Lyapunov function. A novel smoothing method is used for the proposed controller to switch properly in the borders of the zones.

Findings

The polytopic set of the resulting gains creates the smooth switching polytopic LPV (SS-LPV) controller which is applied to the trajectory tracking problem of the six-degree-of-freedom PUMA 560 robotic arm. A sufficient condition ensures that the proposed controller stabilizes the polytopic LPV system against the torque estimation error.

Practical implications

Smoothing of the switching LPV controller is performed by defining some tolerances and creating quasi-zones at the borders of the main zones, leading to compressed main zones. The proposed torque estimation is not a model-based technique, so model variation and other disturbances cannot destroy the performance of the suggested controller. The proposed control scheme does not impose any considerable computational load, because the control gains are obtained offline by solving some LMIs, and the torque computation is done online by a simple polytopic-based equation.

Originality/value

In this paper, a new SS-LPV controller is addressed for the trajectory tracking problem of robotic arms. Robot workspace is zoned into some main zones in such a way that the number of models in each zone is almost equal. Data obtained from the modeling trajectory is used to design the state-feedback control gain.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 22 January 2024

Md. Tareq Hossain Khondoker, Md. Mehrab Hossain and Ayan Saha

Abstract

Purpose

Due to their greater length compared to other construction materials and their distinctive stacking patterns, handling construction steel bars on congested construction sites with limited storage capacity is challenging. The lack of storage space in crowded locations prompts the need to optimize steel bar storage decisions. Therefore, this study aims to optimize the construction steel bar procurement plan by determining when and how much rebar to order and how to stack different sizes of rebar given limited storage capacity.

Design/methodology/approach

A novel approach is presented in this paper by integrating 4D building information modelling (BIM) and mixed-integer linear programming (MILP). The technique uses BIM to retrieve material quantities, including rebar, during the design phase. Activities are then scheduled based on durations determined from crew productivity data and material quantities. Next, the unit price of rebar over the construction period is projected from prior prices using the exponential smoothing method. Finally, the MILP approach generates an optimal steel bar procurement plan for the limited storage space, following the scheduled rebar-related operations.

Findings

The developed strategy minimizes overall procurement costs and ensures the storage of rebar as per standard guidelines. An optimal rebar procurement and storage plan for constructing a six-storey RC frame is presented in this paper as a demonstrative example of the effectiveness of the proposed method.

Originality/value

This work partially satisfies a long-sought research need for establishing a comprehensive construction steel bar procurement system, making it a very useful source of information for practitioners and researchers. The proposed method can be used to minimize a key performance limitation that the conventional rebar procurement practice for crowded building sites may experience.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 22 September 2022

Yassine Benrqya and Imad Jabbouri

Abstract

Purpose

An important phenomenon often observed in supply chains, known as the bullwhip effect, implies that demand variability increases as one moves up the supply chain. Cross-docking, on the other hand, is a distribution strategy that eliminates the inventory-holding function of the retailer distribution center, which then functions as a transfer point rather than a storage point. The purpose of this paper is to analyze the impact of the cross-docking strategy on the bullwhip effect, compared to traditional warehousing.

Design/methodology/approach

The authors quantify this effect in a three-echelon supply chain consisting of stores, a retailer and a supplier. Each participant is assumed to adopt an order-up-to-level policy with an exponential smoothing forecasting scheme. The paper demonstrates mathematically the lower bound of the bullwhip effect reduction achieved by the cross-docking strategy compared to traditional warehousing.
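The mechanism behind the bullwhip effect in such a setting can be sketched with a single echelon that follows an order-up-to policy driven by an exponential smoothing forecast: the variance of its orders exceeds the variance of the demand it observes. The lead time, smoothing factor and demand process below are illustrative assumptions, not the paper's model:

```python
import random
from statistics import pvariance

# Sketch: one echelon with an order-up-to policy and an exponential-smoothing
# forecast amplifies demand variability (the bullwhip effect).
# Lead time, alpha and the demand process are illustrative assumptions.

def echelon_orders(demand, alpha=0.4, lead_time=2):
    forecast = demand[0]
    base = forecast * (lead_time + 1)            # initial order-up-to level
    orders = []
    for d in demand:
        forecast = alpha * d + (1 - alpha) * forecast
        new_base = forecast * (lead_time + 1)    # updated order-up-to level
        orders.append(max(0.0, d + new_base - base))  # replenish + level change
        base = new_base
    return orders

rng = random.Random(1)
demand = [100 + rng.gauss(0, 10) for _ in range(2000)]
orders = echelon_orders(demand)
print(pvariance(orders) > pvariance(demand))     # orders are more variable → True
```

Because the order adds a term proportional to the forecast revision on top of observed demand, order variance grows with both the smoothing factor and the lead time, which is consistent with the paper's finding that the reduction from cross-docking depends on lead times, review periods and the smoothing factor.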

Findings

By simulation, this paper demonstrates that cross-docking reduces the bullwhip effect upstream in the chain. This reduction depends on the lead times, the review periods and the smoothing factor.

Research limitations/implications

A mathematical demonstration cannot be highly generalizable, and this paper should be extended to an empirical investigation where real data can be incorporated in the model. However, the findings of this paper form a foundation for further understanding of the cross-docking strategy and its impact on the bullwhip effect.

Originality/value

This paper fills a gap by proposing a mathematical demonstration and a simulation, to investigate the benefits of implementing cross-docking strategy on the bullwhip effect. This impact has not been studied in the literature.

Details

Journal of Modelling in Management, vol. 18 no. 6
Type: Research Article
ISSN: 1746-5664
