Search results
1 – 10 of 79
Abstract
Purpose
Reynolds-averaged Navier–Stokes (RANS) models often perform poorly in shock/turbulence interaction regions, resulting in excessive wall heat load and an incorrect separation length in shockwave/turbulent boundary layer interactions. The authors suggest that this can be traced back to inadequate numerical treatment of the inviscid fluxes. The purpose of this study is to extend the well-known Harten, Lax, van Leer, Einfeldt (HLLE) Riemann solver to overcome this issue.
Design/methodology/approach
The extended scheme explicitly takes into account the broadening of waves due to the averaging procedure, which adds numerical dissipation and reduces excessive turbulence production across shocks. The scheme is derived from the HLLE equations and is tested against three numerical experiments.
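The HLLE interface flux itself is standard; a minimal Python sketch for a scalar conservation law is shown below, with a hypothetical `broadening` parameter standing in for the paper's wave-broadening dissipation (the abstract does not give the actual closure):

```python
# Hedged sketch: standard HLLE flux for a scalar conservation law.
# `broadening` is a hypothetical stand-in for the paper's wave-broadening
# term; the actual closure is not specified in the abstract.

def hlle_flux(u_l, u_r, flux, s_l, s_r, broadening=0.0):
    """Approximate interface flux between left/right states u_l, u_r.

    flux       : physical flux function f(u)
    s_l, s_r   : estimates of slowest/fastest signal speeds
    broadening : widens the wave fan, adding numerical dissipation
    """
    # Widen the wave fan symmetrically (illustrative only).
    s_l -= broadening
    s_r += broadening
    if s_l >= 0.0:          # all waves move right: pure upwind from the left
        return flux(u_l)
    if s_r <= 0.0:          # all waves move left: pure upwind from the right
        return flux(u_r)
    # Subsonic case: HLLE average; the (u_r - u_l) term carries dissipation.
    return (s_r * flux(u_l) - s_l * flux(u_r)
            + s_l * s_r * (u_r - u_l)) / (s_r - s_l)

# Example: Burgers' equation f(u) = u^2/2 across a right-moving shock.
f = lambda u: 0.5 * u * u
F0 = hlle_flux(2.0, 0.0, f, s_l=0.0, s_r=2.0)                  # no broadening
F1 = hlle_flux(2.0, 0.0, f, s_l=0.0, s_r=2.0, broadening=0.5)  # broadened fan
```

With the broadened fan the interface falls into the subsonic branch and the dissipative `(u_r - u_l)` term becomes active, mimicking how the proposed scheme damps spurious turbulence production at shocks.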
Findings
Sod’s shock tube case shows that the scheme succeeds in reducing turbulence amplification across shocks. A shock-free turbulent flat plate boundary layer indicates that smooth flow at moderate turbulence intensity is largely unaffected by the scheme. A shock/turbulent boundary layer interaction case with higher turbulence intensity shows that the added numerical dissipation can, however, impair the wall heat flux distribution.
Originality/value
The proposed scheme is motivated by implicit large eddy simulations that use numerical dissipation as a subgrid-scale model. Introducing physical aspects of turbulence into the numerical treatment of RANS simulations is a novel approach.
Details
Keywords
Peiman Tavakoli, Ibrahim Yitmen, Habib Sadri and Afshin Taheri
Abstract
Purpose
The purpose of this study is to focus on structured data provision and asset information model maintenance, and to develop a data provenance model on a blockchain-based digital twin of a smart and sustainable built environment (DT) for predictive asset management (PAM) in building facilities.
Design/methodology/approach
Qualitative research data were collected through a comprehensive scoping review of secondary sources. Additionally, primary data were gathered through interviews with industry specialists. The analysis of the data served as the basis for developing blockchain-based DT data provenance models and scenarios. A case study involving a conference room in an office building in Stockholm was conducted to assess the proposed data provenance model. The implementation utilized the Remix Ethereum platform and Sepolia testnet.
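The contract code is not reproduced in the abstract; conceptually, provenance on a blockchain rests on hash-linking each record to its predecessor so that tampering with history is detectable. A minimal Python sketch of that idea (the field names are illustrative, not the authors' schema):

```python
import hashlib
import json

def record_hash(record):
    # Deterministic hash of a provenance record (sorted keys for stability).
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain, payload):
    # Each record stores the hash of its predecessor, forming a tamper-evident chain.
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": payload})
    return chain

def verify(chain):
    # Recompute every link; any altered record breaks the chain downstream.
    for i in range(1, len(chain)):
        if chain[i]["prev"] != record_hash(chain[i - 1]):
            return False
    return True

chain = []
append_record(chain, {"sensor": "room-101-temp", "value": 21.5})
append_record(chain, {"sensor": "room-101-temp", "value": 22.1})
ok_before = verify(chain)          # chain is intact
chain[0]["data"]["value"] = 99.0   # tamper with history
ok_after = verify(chain)           # tampering is detected
```

On an actual chain such as the Sepolia testnet used in the study, the immutability is enforced by consensus rather than by local verification, but the lineage structure is the same.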
Findings
Based on the analysis of the results, a data provenance model on a blockchain-based DT was developed that ensures the reliability and trustworthiness of data used in PAM processes. This is achieved by providing a transparent and immutable record of data origin, ownership and lineage.
Practical implications
The proposed model enables decentralized applications (DApps) to publish real-time data obtained from dynamic operations and maintenance processes, enhancing the reliability and effectiveness of data for PAM.
Originality/value
The research presents a data provenance model on a blockchain-based DT, specifically tailored to PAM in building facilities. The proposed model enhances decision-making processes related to PAM by ensuring data reliability and trustworthiness and providing valuable insights for specialists and stakeholders interested in the application of blockchain technology in asset management and data provenance.
Tapas Kumar Sethy and Naliniprava Tripathy
Abstract
Purpose
This study aims to explore the impact of systematic liquidity risk on the averaged cross-sectional equity return of the Indian equity market. It also examines the effects of illiquidity and decomposed illiquidity on the conditional volatility of the equity market.
Design/methodology/approach
The present study employs the Liquidity-Adjusted Capital Asset Pricing Model (LCAPM) to price systematic liquidity risk, using the Fama and MacBeth cross-sectional regression model in the Indian stock market from January 1, 2012, to March 31, 2021. Further, the study employs an exponential generalized autoregressive conditional heteroscedasticity, EGARCH(1,1), model to observe the impact of decomposed illiquidity on the equity market’s conditional volatility. The study also uses an ordinary least squares (OLS) model to illuminate the return-volatility-liquidity relationship.
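Fama-MacBeth estimation runs period-by-period cross-sectional regressions and then averages the slope estimates over time, with the time-series dispersion of those slopes giving the standard error. A stripped-down sketch on synthetic data (a single illustrative factor, not the authors' LCAPM specification):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 120, 50                       # months, stocks
beta = rng.uniform(0.5, 1.5, N)      # pre-estimated factor loadings
lam_true = 0.8                       # true risk premium per period

# Synthetic excess returns: r_it = lambda * beta_i + noise
R = lam_true * beta[None, :] + rng.normal(0.0, 1.0, (T, N))

# Second pass of Fama-MacBeth: cross-sectional OLS each period, then average.
X = np.column_stack([np.ones(N), beta])
lambdas = np.array([np.linalg.lstsq(X, R[t], rcond=None)[0] for t in range(T)])
lam_hat = lambdas[:, 1].mean()                     # average slope = premium estimate
lam_se = lambdas[:, 1].std(ddof=1) / np.sqrt(T)    # Fama-MacBeth standard error
```

The LCAPM simply replaces the single loading here with the market beta plus the liquidity-risk betas; the two-pass mechanics are unchanged.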
Findings
The study’s findings indicate that the commonality between individual security liquidity and aggregate market liquidity is positive, and that the covariance of individual security liquidity with the market return negatively affects the expected return. Time-series analysis of illiquidity further shows an asymmetric effect of the direction of return changes on illiquidity. In addition, the study finds a significant impact of illiquidity and decomposed illiquidity on conditional volatility, suggesting an asymmetric effect of illiquidity shocks on conditional volatility in the Indian stock market.
Originality/value
This study is one of the few studies that used the World Uncertainty Index (WUI) to measure liquidity and market risks as specified in the LCAPM. Further, the findings of the reverse impact of illiquidity and decomposed higher and lower illiquidity on conditional volatility confirm the presence of price informativeness and its immediate effects on illiquidity in the Indian stock market. The study strengthens earlier studies and offers new insights into stock market liquidity to clarify the association between liquidity and stock return for effective policy and strategy formulation that can benefit investors.
Sultan Mohammed Althahban, Mostafa Nowier, Islam El-Sagheer, Amr Abd-Elhady, Hossam Sallam and Ramy Reda
Abstract
Purpose
This paper comprehensively addresses the influence of chopped strand mat glass fiber-reinforced polymer (GFRP) patch configurations, such as geometry, dimensions, position, number of layers and the use of a single or double patch, as well as the extent to which debonding of the area under the patch affects the strength of cracked aluminum plates with different crack lengths.
Design/methodology/approach
Single-edge cracked aluminum specimens, 150 mm long and 50 mm wide, were tested in tension. The cracked specimens were then repaired using GFRP patches in various configurations. A three-dimensional (3D) finite element method (FEM) was adopted to simulate the repaired cracked aluminum plates with composite patches and to obtain the stress intensity factor (SIF). The numerical modeling and validation in ABAQUS, together with the contour integral method for SIF calculations, provide a valuable tool for further investigation and design optimization.
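The study's SIFs come from contour integrals in ABAQUS; for orientation, the standard handbook estimate for a single-edge cracked plate in tension (Brown-Srawley geometry factor, not the authors' FE result) can be sketched as:

```python
import math

def sif_sent(sigma, a, W):
    """K_I for a single-edge cracked plate in remote tension (handbook estimate).

    sigma : remote tensile stress [MPa]
    a     : crack length [mm]
    W     : plate width [mm]
    Brown-Srawley polynomial, commonly quoted for a/W up to ~0.6.
    """
    r = a / W
    Y = 1.12 - 0.231 * r + 10.55 * r**2 - 21.72 * r**3 + 30.39 * r**4
    return Y * sigma * math.sqrt(math.pi * a)   # units: MPa*sqrt(mm)

# Plate width from the study: W = 50 mm. K_I grows quickly with crack length.
k_short = sif_sent(100.0, 5.0, 50.0)    # short crack
k_long = sif_sent(100.0, 15.0, 50.0)    # longer crack, much higher K_I
```

A patch that bridges the crack faces reduces the effective stress seen by the crack tip, which is why the FE-computed SIF drops as patch width and layer count increase.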
Findings
The width of the GFRP patches affected the efficiency of the rehabilitated cracked aluminum plate: increasing the patch width WP from 5 mm to 15 mm increased the peak load by 9.7% and 17.5%, respectively, compared with the unpatched specimen. The efficiency of the GFRP patch in reducing the SIF increased with the number of layers, i.e. the maximum load was enhanced by 5%.
Originality/value
This study assessed the repair of metallic structures using chopped strand mat GFRP. It demonstrated the superiority of rectangular patches over semicircular ones, the benefit of using double patches to prevent out-of-plane bending, and the detrimental effect of defects in the bonding area between the patch and the cracked component. This underlines the importance of proper surface preparation and bonding techniques for successful repair.
Joonho Na, Qia Wang and Chaehwan Lim
Abstract
Purpose
The purpose of this study is to analyze the environmental efficiency level and trend of the transportation sector in the upper–mid–downstream of the Yangtze River Economic Belt and the JingJinJi region in China and assess the effectiveness of policies for protecting the low-carbon environment.
Design/methodology/approach
This study uses the meta-frontier slack-based measure (SBM) approach to evaluate environmental efficiency, classifying the regions under study into regional groups. First, the SBM with undesirable outputs is employed to construct environmental efficiency measurement models for the four regions under the meta-frontier and the group frontiers, respectively. Then, the technology gap ratio is used to evaluate the gap between each group frontier and the meta-frontier.
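The technology gap ratio itself is simply the ratio of a unit's meta-frontier efficiency score to its group-frontier score, bounded above by 1 because the group frontier is enveloped by the meta-frontier. A toy illustration (the scores are invented, not the paper's data):

```python
def technology_gap_ratio(meta_eff, group_eff):
    # TGR = meta-frontier efficiency / group-frontier efficiency.
    # It cannot exceed 1: a unit is never more efficient against the
    # meta-frontier than against its own (enveloped) group frontier.
    return meta_eff / group_eff

# Illustrative scores for one region-year (not from the study):
tgr = technology_gap_ratio(meta_eff=0.64, group_eff=0.80)
```

A TGR near 1 means the regional group operates close to the best available production technology overall; a low TGR signals a technology gap of the kind reported between the four regions.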
Findings
The analysis reveals several key findings: (1) the JingJinJi region and the downstream YEB achieved better overall production technology in transportation than the other two regions; (2) significant technology gaps in environmental efficiency exist among the four regions; and (3) the downstream region of the YEB exhibited the lowest levels of energy consumption and excess CO2 emissions.
Originality/value
To evaluate the differences in environmental efficiency arising from regional and technological gaps in transportation, this study employs the meta-frontier model, which overcomes a limitation of traditional environmental efficiency methods. Furthermore, in practical terms, the study makes it possible to observe the disparities in transportation efficiency between the Yangtze River Economic Belt and the Beijing–Tianjin–Hebei regions.
Mehmet Kursat Oksuz and Sule Itir Satoglu
Abstract
Purpose
Disaster management and humanitarian logistics (HT) play crucial roles in large-scale events such as earthquakes, floods, hurricanes and tsunamis. Well-organized disaster response is crucial for effectively managing medical centres, staff allocation and casualty distribution during emergencies. To address this issue, this study aims to introduce a multi-objective stochastic programming model to enhance disaster preparedness and response, focusing on the critical first 72 h after earthquakes. The purpose is to optimize the allocation of resources, temporary medical centres and medical staff to save lives effectively.
Design/methodology/approach
This study uses stochastic programming-based dynamic modelling and a discrete-time Markov chain to address uncertainty. The model considers potential road and hospital damage and distance limits and introduces an α-reliability level for untreated casualties. It divides the initial 72 h into four periods to capture earthquake dynamics.
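A discrete-time Markov chain over casualty health states propagates a state distribution across the four periods by repeated multiplication with a transition matrix. A toy Python sketch (states and probabilities invented for illustration, not the study's calibration):

```python
import numpy as np

# Hypothetical casualty states: stable, serious, critical, deceased.
# Transition probabilities per period (illustrative only; rows sum to 1).
P = np.array([
    [0.90, 0.08, 0.02, 0.00],
    [0.20, 0.60, 0.15, 0.05],
    [0.00, 0.25, 0.55, 0.20],
    [0.00, 0.00, 0.00, 1.00],   # deceased is an absorbing state
])

dist = np.array([0.5, 0.3, 0.2, 0.0])   # initial casualty mix
for _ in range(4):                       # four periods covering the first 72 h
    dist = dist @ P                      # propagate one period forward
```

Embedding such a chain in the stochastic program lets the model weigh, period by period, how delayed treatment shifts casualties toward worse states, which is what drives the medical-centre location and staffing decisions.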
Findings
Using a real case study in Istanbul’s Kartal district, the model’s effectiveness is demonstrated for earthquake scenarios. Key insights include optimal medical centre locations, required capacities, necessary medical staff and casualty allocation strategies, all vital for efficient disaster response within the critical first 72 h.
Originality/value
This study innovates by integrating stochastic programming and dynamic modelling to tackle post-disaster medical response. The use of a Markov Chain for uncertain health conditions and focus on the immediate aftermath of earthquakes offer practical value. By optimizing resource allocation amid uncertainties, the study contributes significantly to disaster management and HT research.
Abstract
Purpose
The paper provides a detailed historical account of Douglass C. North's early intellectual contributions and analytical developments in pursuing a Grand Theory for why some countries are rich and others poor.
Design/methodology/approach
The author approaches the discussion using a theoretical and historical reconstruction based on published and unpublished materials.
Findings
The systematic, continuous and profound attempt to answer the Smithian social coordination problem shaped North's journey from a serious young Marxist to one of the founders of New Institutional Economics. In the process, he converted in the early 1950s into a rigid neoclassical economist and became one of the leaders in promoting New Economic History. The success of the cliometric revolution exposed the frailties of the movement itself, namely the limitations of neoclassical economic theory in explaining economic growth and social change. Once transaction costs are incorporated, the institutional framework in which property rights and contracts are measured, defined and enforced assumes a prominent role in explaining economic performance.
Originality/value
In the early 1970s, North adopted a naive theory of institutions and property rights still grounded in neoclassical assumptions, with institutional and organizational analysis modeled as a socially efficient maximizing equilibrium outcome. However, the increasing tension between the neoclassical theoretical apparatus and its failure to account for contrasting political and institutional structures, diverging economic paths and social change propelled the modification of its assumptions and progressive conceptual innovation. In the late 1970s and early 1980s, North abandoned the efficiency view and gradually became more critical of the objective rationality postulate. In this intellectual movement, North's avant-garde research program contributed significantly to the creation of New Institutional Economics.
Oscar Y. Moreno Rocha, Paula Pinto, Maria C. Consuegra, Sebastian Cifuentes and Jorge H. Ulloa
Abstract
Purpose
This study aims to facilitate access to vascular disease screening for low-income individuals living in remote and conflict areas, based on the results of a pilot trial in Colombia. It also aims to increase the amount of diagnostic training in vascular surgery (VS) performed on civilians.
Design/methodology/approach
The operational method comprises five stages: strategy development and adjustment, translation of the strategy into a real-world setting, operation logistics planning, and strategy analysis and adoption. The operational plan worked efficiently in this study’s sample, demonstrating high sensitivity, efficiency and safety in a real-world setting.
Findings
The authors developed and implemented a flow model operating plan for screening vascular pathologies in low-income patients pro bono without proper access to vascular health care. A total of 140 patients from rural areas in Colombia were recruited to a controlled screening session where they underwent serial noninvasive ultrasound assessments conducted by health professionals of different training stages in VS.
Research limitations/implications
The plan was designed to be implemented in remote, conflict areas with limited access to VS care. Vascular injuries are critically important and common among civilians and military forces in regions with active armed conflicts. As this strategy can be modified and adapted to different medical specialties and geographic areas, the authors recommend checking the relevant legislation and legal aspects of the areas where the tool is to be implemented.
Practical implications
Different sub-specialties can implement the described method to be translated into significant areas of medicine, as the authors can adjust the deployment and execution for the assessment in peripheral areas, conflict zones and other public health crises that require a faster response. This is necessary, as the amount of training to which VS trainees are exposed is low. A simulated exercise offers a novel opportunity to enhance their current diagnostic skills using ultrasound in a controlled environment.
Social implications
Evaluating and assessing patients with limited access to vascular medicine and other specialties can decrease the burden of vascular disease and related complications and increase the number of treatments available for remote communities.
Originality/value
It is essential to assess the largest possible number of patients and treat them according to their triage designation. This approach resembles assessment in remote areas without access to a proper VS consult. Using a fast-deployment methodology, the authors were able to identify, classify and redirect to therapeutic intervention those patients with positive findings in remote areas.
Plain language summary
Access to health care is limited due to multiple barriers and the assessment and response, especially in peripheral areas that require a highly skilled team of medical professionals and related equipment. The authors tested a novel mobile assessment tool for remote and conflict areas in a rural zone of Colombia.
Andreas Gschwentner, Manfred Kaltenbacher, Barbara Kaltenbacher and Klaus Roppert
Abstract
Purpose
For accurate numerical simulations of electrical drives, precise knowledge of the local magnetic material properties is of utmost importance. Due to the various manufacturing steps, e.g. heat treatment or cutting techniques, the magnetic material properties can vary strongly from place to place, and the assumption of homogenized global material parameters is no longer feasible. This paper aims to present the general methodology and two different solution strategies for determining the local magnetic material properties using reference and simulation data.
Design/methodology/approach
The general methodology combines measurement, numerical simulation and the solution of an inverse problem. A sensor-actuator system is used to characterize electrical steel sheets locally. Based on the measurement data and the results of the finite element simulation, the inverse problem is solved with two different strategies: the first is a quasi-Newton method (QNM) using Broyden's update formula to approximate the Jacobian, and the second is an adjoint method. To compare both methods with respect to convergence and efficiency, an artificial example with a linear material model is considered.
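Broyden's rank-one update avoids recomputing the Jacobian at every iteration: J_{k+1} = J_k + ((Δf − J_k Δx) Δxᵀ)/(Δxᵀ Δx). A minimal sketch on a small nonlinear test system (not the paper's inverse problem):

```python
import numpy as np

def broyden_solve(f, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 with Broyden's rank-one Jacobian update."""
    x = np.asarray(x0, dtype=float)
    J = np.eye(len(x))                   # crude initial Jacobian guess
    fx = f(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -fx)     # quasi-Newton step
        x_new = x + dx
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        df = fx_new - fx
        # Broyden update: correct J only along the step direction dx.
        J += np.outer(df - J @ dx, dx) / (dx @ dx)
        x, fx = x_new, fx_new
    return x

# Small test system with root (ln 2, 1 - ln 2):
f = lambda v: np.array([np.exp(v[0]) - 2.0, v[0] + v[1] - 1.0])
root = broyden_solve(f, [0.0, 0.0])
```

In the paper's setting each residual evaluation involves a finite element solve, so avoiding exact Jacobians is exactly what makes the QNM competitive with the adjoint approach.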
Findings
The QNM and the adjoint method show similar convergence behavior for two different cutting-edge effects. Furthermore, considering a priori information improved the convergence rate. However, no impact on the stability and the remaining error is observed.
Originality/value
The presented methodology enables a fast and simple determination of the local magnetic material properties of electrical steel sheets without the need for a large number of samples or special preparation procedures.