Search results
Samir Ouchene, Arezki Smaili and Hachimi Fellouah
Abstract
Purpose
This paper aims to investigate the problem of estimating the angle of attack (AoA) and relative velocity for vertical axis wind turbine (VAWT) blades from computational fluid dynamics data.
Design/methodology/approach
Two methods are implemented as function objects within the OpenFOAM framework for estimating the blade’s AoA and relative velocity. For the numerical analysis of the flow around and through the VAWT, 2D unsteady Reynolds-averaged Navier–Stokes (URANS) simulations are carried out and validated against experimental data.
Findings
To gain a better understanding of the complex flow features encountered by VAWT blades, determining the AoA is crucial. Relying on the geometrically derived AoA may lead to erroneous conclusions about blade aerodynamics.
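For context, the geometric AoA referred to above follows from blade kinematics alone. A minimal sketch of the standard relations for a straight-bladed VAWT, assuming an undisturbed free stream with no induced velocity (this is not the paper's CFD-based estimator):

```python
import math

def geometric_aoa(theta_deg, tsr):
    """Geometric angle of attack (deg) of a straight-bladed VAWT blade at
    azimuth theta for tip speed ratio tsr, neglecting induced velocity."""
    th = math.radians(theta_deg)
    return math.degrees(math.atan2(math.sin(th), tsr + math.cos(th)))

def relative_velocity(theta_deg, tsr, u_inf=1.0):
    """Magnitude of the blade-relative velocity under the same assumptions."""
    th = math.radians(theta_deg)
    return u_inf * math.sqrt(1.0 + 2.0 * tsr * math.cos(th) + tsr ** 2)
```

At a tip speed ratio of 2, this geometric AoA reaches 30° at 120° azimuth, well beyond static stall for typical airfoils at low Reynolds numbers; the AoA actually experienced by the blade differs once induction and blade–vortex interaction set in, which is why CFD-based estimators matter.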
Practical implications
This study can lead to the development of more robust optimization techniques for enhancing the variable-pitch control mechanism of VAWT blades and improving low-order models based on the blade element momentum theory.
Originality/value
Assessment of the reliability of AoA and relative velocity estimation methods for VAWT blades at low Reynolds numbers using URANS turbulence models in the context of dynamic stall and blade–vortex interactions.
Abstract
Purpose
Weak repeatability is observed in handcrafted keypoints, leading to tracking failures in visual simultaneous localization and mapping (SLAM) systems under challenging scenarios such as illumination change, rapid rotation and large variations in viewing angle. In contrast, learning-based keypoints exhibit higher repeatability but entail considerable computational costs. This paper proposes an innovative keypoint extraction algorithm that strikes a balance between precision and efficiency, aiming at accurate, robust and versatile visual localization in highly complex scenes.
Design/methodology/approach
SiLK-SLAM initially refines the cutting-edge learning-based extractor, SiLK, and introduces an innovative postprocessing algorithm for keypoint homogenization and operational efficiency. Furthermore, SiLK-SLAM devises a reliable relocalization strategy called PCPnP, leveraging progressive and consistent sampling, thereby bolstering its robustness.
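The homogenization step can be pictured as grid-based non-maximum suppression: divide the image into cells and keep only the top-scoring keypoints in each. A minimal sketch, where the cell size, per-cell budget and function name are illustrative assumptions rather than SiLK-SLAM's actual postprocessing:

```python
import numpy as np

def homogenize_keypoints(xy, scores, img_w, img_h, cell=64, per_cell=2):
    """Keep at most `per_cell` top-scoring keypoints per `cell`-pixel grid
    cell, spreading detections evenly over the image. Returns kept indices."""
    cols = int(np.ceil(img_w / cell))
    cell_id = (xy[:, 1] // cell).astype(int) * cols + (xy[:, 0] // cell).astype(int)
    keep = []
    for c in np.unique(cell_id):
        idx = np.where(cell_id == c)[0]
        best = idx[np.argsort(scores[idx])[::-1][:per_cell]]  # top scores in cell
        keep.extend(best.tolist())
    return np.array(sorted(keep))
```

Keypoints crowded into one textured corner would otherwise dominate matching; capping each cell trades a little peak response for spatial coverage, which is what stabilizes tracking.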
Findings
Empirical evaluations conducted on the TUM, KITTI and EuRoC data sets substantiate SiLK-SLAM’s superior localization accuracy compared to ORB-SLAM3 and other methods. Compared to ORB-SLAM3, SiLK-SLAM improves localization accuracy by up to 70.99%, 87.20% and 85.27% across the three data sets, respectively. The relocalization experiments demonstrate SiLK-SLAM’s capability to produce precise and repeatable keypoints, showcasing its robustness in challenging environments.
Originality/value
SiLK-SLAM achieves high localization accuracy and resilience in challenging scenarios, which is of great importance for enhancing the autonomy of robots navigating intricate environments. Code is available at https://github.com/Pepper-FlavoredChewingGum/SiLK-SLAM.
Alana Vandebeek, Wim Voordeckers, Jolien Huybrechts and Frank Lambrechts
Abstract
Purpose
The purpose of this study is to examine how informational faultlines on a board affect the management of knowledge owned by directors and the consequences on organizational performance. In this study, informational faultlines are defined as hypothetical lines that divide a group into relatively homogeneous subgroups based on the alignment of several informational attributes among board members.
Design/methodology/approach
The study uses unique hand-collected panel data covering 7,247 board members at 106 publicly traded firms to provide strong support for the hypothesized U-shaped relationship. The authors use a fixed effects approach and a system generalized method of moments approach to test the hypothesis.
Findings
The study finds that the relationship between informational faultlines on a board and organizational performance is U shaped, with the least optimal organizational performance experienced when boards have moderate informational faultlines. More specifically, informational faultlines within boards are negatively related to organizational performance across the weak-to-moderate range of informational faultlines and positively related to organizational performance across the moderate-to-strong range.
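The U-shaped relationship reported here is the kind that a quadratic specification detects: performance falls, bottoms out, then rises as faultline strength grows. A minimal illustration on synthetic data using ordinary least squares (the authors' actual fixed-effects and system GMM estimations are not reproduced here):

```python
import numpy as np

# Synthetic stand-in for the board data: performance first falls, then rises,
# as hypothetical faultline strength grows (true turning point at 0.4).
rng = np.random.default_rng(0)
faultline = rng.uniform(0.0, 1.0, 200)
performance = 1.0 - 2.0 * faultline + 2.5 * faultline ** 2
performance += rng.normal(0.0, 0.05, 200)

# A positive squared-term coefficient indicates a U shape; the minimum
# (least optimal performance) sits at -b1 / (2 * b2).
b2, b1, b0 = np.polyfit(faultline, performance, 2)  # highest degree first
turning_point = -b1 / (2.0 * b2)
```

On real panel data the quadratic term would be estimated alongside fixed effects and instruments, and a formal U-test would check that both arms of the curve are significant within the observed range.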
Research limitations/implications
By explaining the mechanisms through which informational faultlines are related to organizational performance, the authors contribute to the literature in a number of ways. By conceptualizing how the management of knowledge plays an important role in the particular setting of corporate boards, the authors add not only to literature on knowledge management but also to the faultline and corporate governance literature.
Originality/value
This study offers a rationale for prior mixed findings by providing an alternative theoretical basis to explain the effect of informational faultlines within boards on organizational performance. To advance the field, the authors build on the concept of knowledge demonstrability to illuminate how informational faultlines affect the management of knowledge within boards, which will translate to organizational performance.
Yunjue Huang, Dezhu Ye and Shulin Xu
Abstract
Purpose
The purpose of this paper is to explore the matching relationship between factor endowment and industrial structure, and its impact on economic growth.
Design/methodology/approach
The assortative matching method is developed to quantitatively measure the matching between factor endowment and industrial structure. A series of empirical tests is then carried out to evaluate the impact of this matching on economic development.
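One simple way to picture a matching measure of this kind is a rank correlation between a country's factor-endowment ordering and its industrial-structure ordering, where +1 means perfectly assortative and -1 perfectly anti-assortative. This is an illustrative proxy only, not the paper's assortative matching method:

```python
import numpy as np

def matching_index(endowment, structure):
    """Spearman rank correlation between factor-endowment and
    industrial-structure orderings (no-ties case): +1 = perfectly
    assortative matching, -1 = perfectly anti-assortative."""
    e = np.argsort(np.argsort(endowment))  # ranks 0..n-1
    s = np.argsort(np.argsort(structure))
    n = len(e)
    d = e - s
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))
```

A country whose capital-intensive sectors are exactly as prominent as its capital endowment warrants would score near +1; the paper's findings suggest growth is highest when matching of this kind is maximal.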
Findings
(1) The matching between factor endowment and industrial structure has a significantly positive impact on economic growth. (2) Economic growth reaches its maximum when the gap between the two sectors narrows to zero. (3) This effect is particularly significant for countries with higher GDP per capita and GNI per capita. (4) The results remain robust across a series of robustness tests.
Practical implications
Aggressive industrial policies are not desirable. The optimal industrial structure is the one that complies with the comparative advantage of the economy’s given factor endowment.
Originality/value
To date, there has been no applicable quantitative indicator for measuring the matching between factor endowment and industrial structure, even though such a measure is essential for conducting empirical tests and providing evidence for related economic theories.
Mengkai Liu and Meng Luo
Abstract
Purpose
The poor capacity for prefabricated construction cost estimation is an essential reason for the low profitability of general contractors. Therefore, this study focuses on the cost estimation of prefabricated construction, aiming to enhance the accuracy of total project cost estimation for general contractors and ultimately improve profitability.
Design/methodology/approach
This study used Vensim PLE software to establish a system dynamics model. In the modeling process, a systematic research review was used to identify cost-influencing factors; ABC classification and the analytic hierarchy process were used to score and determine the weights of influencing factors.
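The AHP weighting step mentioned above derives factor weights from a pairwise-comparison matrix, typically via its principal eigenvector. A minimal sketch, in which the factor set and judgment values are placeholders rather than the paper's:

```python
import numpy as np

def ahp_weights(pairwise):
    """Factor weights from an AHP pairwise-comparison matrix via the
    principal eigenvector, normalized to sum to 1."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

# Hypothetical judgments: a consistent matrix built from target weights
# 0.5 / 0.3 / 0.2 for three factors, for illustration only.
target = np.array([0.5, 0.3, 0.2])
weights = ahp_weights(np.outer(target, 1.0 / target))
```

With real expert judgments the matrix is rarely perfectly consistent, so a consistency ratio check (CR < 0.1) normally accompanies this step before the weights feed the system dynamics model.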
Findings
The total cost error obtained by the model is less than 2% compared with the actual value, so the model can be used for cost estimation and analysis. The analysis identifies seven key factors, among which the prefabrication rate has the most significant impact. Furthermore, the model can provide the extreme cost range; the minimum cost is 13% lower than the value in the case study. The factors’ values can form the basis of a cost control strategy for general contractors.
Practical implications
The cost of prefabricated buildings can be estimated well, and deciding the prefabrication rate is crucial. Costs can be reduced through correct cost control strategies during bidding and subcontracting, and these strategies can be guided by the model’s results.
Originality/value
A systemic, quantitative and qualitative analysis of cost estimation of prefabricated buildings for general contractors has been conducted. A mathematical model has been developed and validated to facilitate more effective cost-control measures.
Emmanuel Asafo-Adjei, Anokye M. Adam, Peterson Owusu Junior, Clement Lamboi Arthur and Baba Adibura Seidu
Abstract
Purpose
This study investigates information flow of market constituents and global indices at multi-frequencies.
Design/methodology/approach
The study’s findings were obtained by applying Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (I-CEEMDAN)-based cluster analysis together with Rényi effective transfer entropy (RETE).
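As background, transfer entropy T(X→Y) measures how much the history of X reduces uncertainty about the next value of Y beyond Y's own history. The sketch below computes the plain Shannon version (the q→1 limit of the Rényi transfer entropy) for binary series with one-step histories; the paper's RETE on I-CEEMDAN-decomposed returns is considerably more involved:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Shannon transfer entropy T(x -> y) in bits for binary sequences with
    one-step histories: how much knowing x[t] reduces uncertainty about
    y[t+1] beyond what y[t] already tells us."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pairs_yx[(y0, x0)]            # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y1 | y0)
        te += (c / n) * np.log2(p_cond_full / p_cond_self)
    return te

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 2000).tolist()
y_driven = [0] + x[:-1]  # y is x lagged one step: x fully drives y
te_driven = transfer_entropy(x, y_driven)
te_none = transfer_entropy(x, rng.integers(0, 2, 2000).tolist())
```

Here te_driven comes out near 1 bit while te_none stays near 0. The negative *effective* flows the paper reports can arise because RETE subtracts a shuffled-surrogate baseline, which plain transfer entropy does not.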
Findings
The authors find significant negative information flows between sustainability equities (SEs) and conventional equities (CEs) at most multi-frequencies, which enhances diversification benefits. The information flows are mostly bidirectional, highlighting the importance of stock markets’ constituents and their global indices in portfolio construction.
Research limitations/implications
The authors find that both SE and CE markets are mostly heterogeneous, revealing some degree of market inefficiency.
Originality/value
The empirical literature on CEs is replete with studies of their return dynamics for diversification purposes, leaving very little known about the return behaviour of SEs. Meanwhile, an avalanche of Corporate Social Responsibility (CSR) initiatives enjoins firms to operate in a socially responsible manner, but investors need a clear reason to remain sustainable into the foreseeable future. Ultimately, investors desire a well-diversified portfolio and will demand stocks to the extent that they form a reliable portfolio, especially among SEs and/or CEs.
Umair Khan, William Pao, Karl Ezra Salgado Pilario, Nabihah Sallih and Muhammad Rehan Khan
Abstract
Purpose
Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime identification.
Design/methodology/approach
A numerical two-phase flow model was validated against experimental data and was used to generate dynamic pressure signals for three different flow regimes. First, four distinct methods were used for feature extraction: discrete wavelet transform (DWT), empirical mode decomposition, power spectral density and the time series analysis method. Kernel Fisher discriminant analysis (KFDA) was used to simultaneously perform dimensionality reduction and machine learning (ML) classification for each set of features. Finally, the Shapley additive explanations (SHAP) method was applied to make the workflow explainable.
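The DWT feature extraction step can be illustrated with a hand-rolled multi-level Haar transform, taking the minimum and maximum detail coefficients at each level as features (the min/max at levels 4 and 2 are the ones SHAP later singles out). The Haar wavelet and the feature layout are assumptions made for self-containment; the abstract does not specify the paper's wavelet family:

```python
import numpy as np

def haar_dwt_minmax(signal, levels=4):
    """Multi-level Haar DWT of a pressure signal; returns {level: (min, max)}
    of the detail coefficients at each decomposition level."""
    a = np.asarray(signal, dtype=float)
    feats = {}
    for lvl in range(1, levels + 1):
        if len(a) % 2:
            a = np.append(a, a[-1])               # pad odd lengths
        detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)    # approximation carried down
        feats[lvl] = (float(detail.min()), float(detail.max()))
    return feats
```

Feeding such per-level min/max features into KFDA yields a low-dimensional, discriminative embedding; the virtual flow regime map in the findings is this kind of two-dimensional projection.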
Findings
The results highlight that the DWT + KFDA method exhibited the highest testing and training accuracy, at 95.2% and 88.8%, respectively. The results also include a virtual flow regime map to facilitate the visualization of features in two dimensions. Finally, SHAP analysis showed that the minimum and maximum values extracted at the fourth and second signal decomposition levels of the DWT are the best flow-distinguishing features.
Practical implications
This workflow can be applied to opaque pipes fitted with pressure sensors to achieve flow assurance and automatic monitoring of two-phase flow occurring in many process industries.
Originality/value
This paper presents a novel flow regime identification method by fusing dynamic pressure measurements with ML techniques. The authors’ novel DWT + KFDA method demonstrates superior performance for flow regime identification with explainability.
Abstract
Purpose
The purpose of this paper is to investigate the vehicle-based sensor effect and pavement temperature on road condition assessment, as well as to compute a threshold value for the classification of pavement conditions.
Design/methodology/approach
Four sensors were placed on the vehicle’s control arms and one inside the vehicle to collect vibration acceleration data for analysis. Analysis of variance (ANOVA) tests were performed to diagnose the effect of the vehicle-based sensors’ placement in the field. To classify road conditions and identify pavement distress (points of interest), a probability distribution was fitted to the magnitude values of the vibration data.
Findings
Results from ANOVA indicate that pavement sensing patterns from the sensors placed on the front control arms were statistically significant, and that there is no difference between sensors placed on the same side of the vehicle (e.g., left or right side). A reference threshold (i.e., 1.7 g) was computed with the distribution fitting method to classify road conditions and identify road distress based on magnitude values that combine the accelerations along all three axes. In addition, pavement temperature was found to be highly correlated with the sensing patterns, which is noteworthy for future projects.
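Applied as a classifier, the 1.7 g reference threshold flags samples whose combined three-axis acceleration magnitude exceeds it. A minimal sketch, with raw accelerations assumed in m/s² and converted to g; axis conventions and any gravity compensation are assumptions not spelled out in the abstract:

```python
import numpy as np

G = 9.81            # m/s^2 per g
THRESHOLD_G = 1.7   # reference threshold from the distribution fitting

def flag_distress(ax, ay, az):
    """Flag samples whose combined three-axis acceleration magnitude
    exceeds the 1.7 g threshold (inputs in m/s^2)."""
    mag_g = np.sqrt(np.asarray(ax) ** 2 + np.asarray(ay) ** 2
                    + np.asarray(az) ** 2) / G
    return mag_g > THRESHOLD_G
```

A smooth road contributes roughly 1 g (gravity alone) to the magnitude, so exceedances of 1.7 g isolate genuine vertical shocks caused by distress features.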
Originality/value
The paper investigates the effect of pavement sensors’ placement in assessing road conditions, emphasizing the implications for future road condition assessment projects. A threshold value for classifying road conditions was proposed and applied in class assignment (I-17 highway projects).