Search results
1 – 10 of over 1000

Wael Amin Nasr El-Din, Mona Hassan Mohammed Ali, Gisma Ahmed Madani and Islam Omar Abdel Fattah
Abstract
Purpose
Sex and age estimation is important, particularly when information about the deceased is unavailable. There are limited radiological studies investigating side, sex and age differences in normal ankle morphometric parameters. The authors’ goal was to evaluate different ankle joint morphometric measurements and document variations among Egyptians.
Design/methodology/approach
A prospective study was conducted over 23 months on 203 adult Egyptians (100 males and 103 females), aged 20–69 years, who were referred for plain x-rays of bilateral normal ankle joints.
Findings
Ankle parameters showed no statistically significant difference between the two sides, except for tarsal width (TaW), which was significantly greater on the right than the left side (26.92 ± 2.66 vs 26.18 ± 2.65 mm). Males showed significantly higher morphometric values, except for anteroposterior gap (APG) and talus height (TaH), which were significantly higher in females (2.29 ± 0.80 vs 1.80 ± 0.61 mm and 13.01 ± 1.68 vs 11.87 ± 1.91 mm, respectively). There was a significant increase in tibial arc length, APG, distance of the level of MTiTh from the anterior limit of the mortise, distance of the level of MTiTh from the vertex of the mortise, sagittal distance between the tibial and talar vertices and sagittal radius of the trochlea tali arc in the old age group compared to the young one. A significant decrease in tibial width, malleolar width, TaW and TaH was noted in the old age group compared to the young one.
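Side-to-side comparisons of this kind are typically done with a paired test, since both ankles come from the same subject. A minimal sketch on synthetic data follows; the means and SDs mirror the reported TaW values, but the per-subject correlation structure is an assumption made purely for illustration, not taken from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 203  # sample size matching the study

# Synthetic paired measurements of tarsal width (TaW) in mm; the means
# mirror the reported left/right values, but the within-subject noise
# level is assumed for illustration.
left = rng.normal(26.18, 2.65, n)
right = left + rng.normal(0.74, 0.80, n)  # right side systematically wider

# Paired t-test, since both ankles belong to the same subject
t_stat, p_value = stats.ttest_rel(right, left)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

With a consistent right-minus-left offset of this size, the paired test detects the side difference easily even though the two marginal distributions overlap heavily.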
Originality/value
Ankle joints of both sides are mostly symmetrical; however, there are significant differences in most morphometric values due to sex and age. These findings may be essential for side, sex and age determination.
Koraljka Golub, Osma Suominen, Ahmed Taiye Mohammed, Harriet Aagaard and Olof Osterman
Abstract
Purpose
In order to estimate the value of semi-automated subject indexing in operative library catalogues, the study aimed to investigate five different automated implementations of an open source software package on a large set of Swedish union catalogue metadata records, with Dewey Decimal Classification (DDC) as the target classification system. It also aimed to contribute to the body of research on aboutness and related challenges in automated subject indexing and evaluation.
Design/methodology/approach
On a sample of over 230,000 records with close to 12,000 distinct DDC classes, the open source tool Annif, developed by the National Library of Finland, was applied in the following implementations: lexical algorithm, support vector classifier, fastText, Omikuji Bonsai and an ensemble approach combining the former four. A qualitative study involving two senior catalogue librarians and three students of library and information studies was also conducted to investigate the value and inter-rater agreement of automatically assigned classes, on a sample of 60 records.
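As an illustration of how an ensemble can combine the four backends, the sketch below averages per-class suggestion scores. The DDC class codes and score values are invented, and this is a generic score-averaging combination, not necessarily Annif's exact ensemble logic:

```python
# Hypothetical per-class suggestion scores from the four backends for one
# record; the DDC codes and score values are invented for illustration.
backend_scores = {
    "lexical":  {"025": 0.80, "020": 0.10, "004": 0.05},
    "svc":      {"025": 0.60, "004": 0.30},
    "fasttext": {"020": 0.50, "025": 0.40},
    "omikuji":  {"025": 0.70, "004": 0.20},
}

def ensemble(scores_by_backend):
    """Average each class's score over all backends, treating a missing
    class as score 0 (a simple score-averaging combination)."""
    classes = {c for scores in scores_by_backend.values() for c in scores}
    n = len(scores_by_backend)
    return {c: sum(s.get(c, 0.0) for s in scores_by_backend.values()) / n
            for c in classes}

combined = ensemble(backend_scores)
best = max(combined, key=combined.get)
print(best, round(combined[best], 4))
```

A class suggested strongly by several backends wins over one suggested strongly by a single backend, which is one intuition for why the ensemble outperformed the individual implementations.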
Findings
The best results were achieved using the ensemble approach, which reached 66.82% accuracy on the three-digit DDC classification task. The qualitative study confirmed earlier studies reporting low inter-rater agreement, but also pointed to the potential value of automatically assigned classes as additional access points in information retrieval.
Originality/value
The paper presents an extensive study of automated classification in an operative library catalogue, accompanied by a qualitative study of automated classes. It demonstrates the value of applying semi-automated indexing in operative information retrieval systems.
Aicha Gasmi, Marc Heran, Noureddine Elboughdiri, Lioua Kolsi, Djamel Ghernaout, Ahmed Hannachi and Alain Grasmick
Abstract
Purpose
The main purpose of this study is to develop a new tool to quantify the biomass in a bioreactor operating under steady-state conditions.
Design/methodology/approach
Modeling is the most relevant tool for understanding the functioning of complex processes such as biological wastewater treatment. Steady-state model equations of Activated Sludge Model 1 (ASM1) were developed, especially for the autotrophic biomass (XBA) and the oxygen uptake rate (OUR). Furthermore, a respirometric measurement, under steady-state and endogenous conditions, was used as a new tool for quantifying the viable biomass concentration in the bioreactor.
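A textbook steady-state expression for autotrophic biomass has this general form; it is not necessarily the authors' exact ASM1 derivation, and all parameter values below are illustrative, not taken from the paper. It does show the qualitative behavior reported in the findings, namely XBA increasing with SRT:

```python
def autotroph_biomass(Y_A, b_A, SRT, HRT, dN):
    """Textbook steady-state autotrophic biomass concentration:
    X_BA = (SRT/HRT) * Y_A * dN / (1 + b_A * SRT),
    with yield Y_A (mg COD/mg N), decay rate b_A (1/d), solids retention
    time SRT (d), hydraulic retention time HRT (d) and nitrified
    nitrogen dN (mg N/L)."""
    return (SRT / HRT) * Y_A * dN / (1.0 + b_A * SRT)

# Illustrative parameter values (not from the paper)
for SRT in (20.0, 40.0):
    X_BA = autotroph_biomass(Y_A=0.24, b_A=0.08, SRT=SRT, HRT=0.5, dN=30.0)
    print(f"SRT = {SRT:.0f} d -> X_BA ~ {X_BA:.0f} mg COD/L")
```

Doubling the SRT roughly increases the retained autotrophic biomass, consistent with the higher XBA reported at SRT = 40 d than at 20 d.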
Findings
The developed steady-state equations simplified the sensitivity analysis and allowed quantification of the autotrophic biomass (XBA). The XBA concentration was approximately 212 mg COD/L and 454 mg COD/L for SRT equal to 20 and 40 d, respectively. Under steady-state conditions, monitoring of the endogenous OUR permitted biomass quantification in the bioreactor. Comparing XBA obtained from the steady-state equation and the respirometric tool indicated a percentage deviation of about 3 to 13%. Modeling the bioreactor using GPS-X showed excellent agreement between simulation and experimental measurements concerning the XBA evolution.
Originality/value
These results confirmed the importance of respirometric measurements as a simple and available tool for quantifying biomass.
Shivangi Viral Thakker, Santosh B. Rane and Vaibhav S. Narwane
Abstract
Purpose
Digital supply chains require nascent technologies like blockchain and Internet of Things (IoT). There is a need to develop a roadmap for the implementation of these technologies, as they require a huge amount of resources and infrastructure. The purpose of this paper is to analyze the challenges of implementing blockchain-IoT integrated architecture in the green supply chain and develop strategies for the same.
Design/methodology/approach
After a thorough literature survey of Scopus-indexed journals and books, 37 barriers were identified, which were then brought down to 15 barriers after confirming with industry and academic experts using the Delphi method. Using the total interpretive structural modeling (TISM) method and cross-impact matrix multiplication applied to classification (MICMAC) analysis, the barriers were modeled, and finally, strategies were formulated using a concept map to handle the barriers in the blockchain-IoT integrated architecture for a green supply chain.
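MICMAC analysis classifies barriers by driving power (row sums of the final reachability matrix) and dependence power (column sums). A minimal sketch with a hypothetical five-barrier reachability matrix follows; the study itself used 15 barriers, and the matrix entries here are invented:

```python
import numpy as np

# Hypothetical final reachability matrix: R[i, j] = 1 means barrier i
# drives (reaches) barrier j. Values are illustrative only.
R = np.array([
    [1, 1, 1, 0, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 1],
])

driving = R.sum(axis=1)      # row sums: how many barriers each one drives
dependence = R.sum(axis=0)   # column sums: how many barriers drive it

# MICMAC quadrants, split at the midpoint of the possible range
mid = R.shape[0] / 2
for i, (d, p) in enumerate(zip(driving, dependence)):
    quadrant = ("autonomous" if d <= mid and p <= mid else
                "dependent" if d <= mid else
                "driver" if p <= mid else
                "linkage")
    print(f"barrier {i + 1}: driving={d}, dependence={p} -> {quadrant}")
```

Barriers with high driving and low dependence power are the root causes that strategies should target first; highly dependent barriers tend to resolve once the drivers are addressed.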
Findings
This paper presents the research on barriers that can be considered for incorporating blockchain and IoT in the green supply chain. It was found from the TISM model that environmental concerns are Level-1 barriers and need to be addressed by developing appropriate technology and allocating funds for the same. An integrated ecosystem with blockchain and IoT is developed.
Research limitations/implications
The focus of this study was on the challenges of blockchain and IoT; hence, it is required to extend the research and find challenges for different industries and also analyze the criteria using other multi-criteria decision-making (MCDM) methods. Further research is required for the integration of blockchain-IoT with supply chain functions.
Practical implications
The transformation of a traditional supply chain into a green supply chain is possible with the integration of technologies. This research work and the strategies developed are useful to managers and practitioners working on technology implementation. Planning resources and addressing key barriers is possible with the concept maps and architecture developed.
Social implications
Green supply chain management (SCM) is gaining importance in industry as well as in the academic sector due to government policies and norms worldwide for reducing emissions and encouraging environment-friendly production systems. Incorporating blockchain and IoT in a green supply chain will further digitize supply chains and increase their transparency.
Originality/value
We categorized all barriers based on a survey of academics and industry experts from industries in India. The concept map helps in identifying possible solutions to the challenges and initiatives to be taken for the smooth integration of technologies in the green supply chain.
Abstract
Purpose
Cotton soliton is a newly introduced notion in the field of Riemannian manifolds. The object of this article is to study the properties of this soliton on certain contact metric manifolds.
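For readers unfamiliar with the notion, one common form of the defining equation is given below; sign and scaling conventions vary across the literature, so this should be read as illustrative rather than as the paper's exact convention:

```latex
C + \tfrac{1}{2}\,\mathcal{L}_V g + \lambda\, g = 0,
```

where $C$ is the Cotton tensor of the 3-manifold $(M, g)$, $\mathcal{L}_V g$ is the Lie derivative of the metric along the potential vector field $V$ and $\lambda$ is a real constant; the soliton is called steady when $\lambda = 0$.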
Design/methodology/approach
The authors consider the notion of Cotton soliton on almost Kenmotsu 3-manifolds. The authors use a local basis of the manifold that helps to study this notion in terms of partial differential equations.
Findings
First, the authors consider the case where the potential vector field is pointwise collinear with the Reeb vector field and prove the non-existence of such a Cotton soliton. Next, the authors assume that the potential vector field is orthogonal to the Reeb vector field. It is proved that such a Cotton soliton on a non-Kenmotsu almost Kenmotsu 3-h-manifold, such that the Reeb vector field is an eigenvector of the Ricci operator, is steady and the manifold is locally isometric to.
Originality/value
The results of this paper are new and interesting. Also, Proposition 3.2 will be helpful in further study of this space.
Abstract
Purpose
The aggregate index and the per capita index have different meanings for some countries or regions. CO2 emissions per capita matter for China because of its huge population. Therefore, this study aims to deepen the understanding of the Kuznets curve from the perspective of CO2 emissions per capita. In this study, mathematical formulas are derived and verified.
Design/methodology/approach
First, this study verified the existing problems with the environmental Kuznets curve (EKC) through multiple regression. Second, this study developed a theoretical derivation with the Solow model and balanced growth and explained the underlying principles of the EKC’s shape. Finally, this study quantitatively analyzed the influencing factors.
Findings
CO2 emissions per capita are related to per capita GDP, nonfossil energy and total factor productivity (TFP). Empirical results support the EKC hypothesis. When the proportion of nonfossil energy and TFP increase by 1%, per capita CO2 emissions decrease by 0.041 t and 1.79 t, respectively. The growth rate of CO2 emissions per capita is determined by the difference between the growth rate of output per capita and the sum of the efficiency and structural growth rates. To achieve the CO2 emission intensity target and the economic growth target, the growth rate of per capita CO2 emissions must fall within the range of [−0.92%, 6.1%].
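The growth-rate relationship stated in the findings can be written compactly; the symbols below are introduced for illustration and are not necessarily the paper's notation:

```latex
g_c \;=\; g_y - \left(g_e + g_s\right),
```

where $g_c$ is the growth rate of per capita CO2 emissions, $g_y$ the growth rate of per capita output, $g_e$ the efficiency (TFP-driven) growth rate and $g_s$ the structural (nonfossil energy share) growth rate. Keeping $g_c$ inside a target band such as the reported [−0.92%, 6.1%] then constrains how fast output per capita may grow for given efficiency and structural gains.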
Originality/value
Inspired by the EKC and balanced growth, this study investigated the relationships between China’s environmental variables (empirical analysis) and developed a theoretical background (macro-theoretical derivation) through formula-based derivation, the results of which are universally valuable and provide policymakers with a newly integrated view of emission reduction and balanced development to address the challenges associated with climate change caused by energy.
Giovanna Gavana, Pietro Gottardo and Anna Maria Moisello
Abstract
Purpose
The aim of this paper is to examine the effect of structural and demographic board diversity as well as board tenure on family firms' environmental performance, by analyzing the differences between family and non-family businesses and within family firms.
Design/methodology/approach
Tobit regressions are applied to investigate the effect of independent directors, CEO non-duality, board gender diversity and board tenure on environmental performance. The study also controls for other board and firm characteristics, as well as for time, industry and country-fixed effects. In doing so, the authors rely on a sample of non-financial listed firms from France, Germany, Italy, Spain and Portugal over the period 2014–2021.
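Tobit regression is used when the dependent variable (here, an environmental-performance score) is censored at a boundary. The following is a self-contained maximum-likelihood sketch on synthetic data censored at zero; the specification, coefficients and data are illustrative and not the authors':

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])      # intercept + one regressor
beta_true = np.array([0.5, 1.0])
y_star = X @ beta_true + rng.normal(scale=1.0, size=n)
y = np.maximum(y_star, 0.0)               # left-censoring at zero

def neg_loglik(params):
    """Negative Tobit log-likelihood: Gaussian density for uncensored
    observations, Gaussian CDF mass for observations censored at 0."""
    b, log_s = params[:-1], params[-1]
    s = np.exp(log_s)                     # enforce sigma > 0
    xb = X @ b
    cens = y <= 0
    ll = norm.logpdf((y[~cens] - xb[~cens]) / s).sum() - (~cens).sum() * log_s
    ll += norm.logcdf(-xb[cens] / s).sum()
    return -ll

res = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
beta_hat = res.x[:2]
print("estimated coefficients:", np.round(beta_hat, 2))
```

Ordinary least squares on the censored outcome would bias the slope toward zero; the Tobit likelihood recovers the underlying coefficients by modeling the censoring explicitly.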
Findings
The authors find that women on the board positively influence environmental performance and this effect is significant only in family firms, although board tenure negatively moderates the relationship. Board independence significantly affects environmental performance only in non-family firms. A strong presence of family directors has a negative effect on family firms' environmental performance, especially when directors' turnover is low.
Originality/value
This paper examines the unexplored relationship between structural board diversity and environmental performance in family companies. This study provides empirical evidence on the association between gender diversity and family firms' environmental performance focusing for the first time on a European setting. Moreover, this study provides evidence of a different effect of board tenure in family and non-family businesses.
Abstract
Purpose
Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many application domains such as health-care systems, pattern recognition, traffic control, surveillance systems, etc. However, accurate segmentation is a critical task, since finding one model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model intended to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines the three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma distributions) to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkeley Segmentation Benchmark Data set (BSDS) and Common Objects in COntext (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with significant processing time reduction compared to other segmentation models, using different types and fields of benchmarking data sets.
Design/methodology/approach
The proposed PPSM combines the three benchmark distribution thresholding techniques to estimate an optimum threshold value that leads to optimum extraction of the segmented region: Gaussian, lognormal and gamma distributions.
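A minimal NumPy implementation of minimum cross-entropy thresholding in the classical Li–Lee histogram form is sketched below. The PPSM additionally fits Gaussian, lognormal and gamma models and parallelizes the computation; those elements are omitted here, so this is the plain single-threshold baseline, not the authors' full method:

```python
import numpy as np

def mcet_threshold(image):
    """Minimum cross-entropy threshold over a 256-level gray histogram
    (Li-Lee criterion): minimize
    -m1(t)*log(mu1(t)) - m2(t)*log(mu2(t))
    where m_k are gray-weighted masses and mu_k the class means."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    g = np.arange(256, dtype=float)
    cum_h = np.cumsum(hist)           # pixel counts up to each level
    cum_gh = np.cumsum(g * hist)      # gray-weighted mass up to each level
    best_t, best_eta = 1, np.inf
    for t in range(1, 256):
        n1, n2 = cum_h[t - 1], cum_h[-1] - cum_h[t - 1]
        if n1 == 0 or n2 == 0:
            continue                  # both classes must be non-empty
        m1, m2 = cum_gh[t - 1], cum_gh[-1] - cum_gh[t - 1]
        mu1, mu2 = m1 / n1, m2 / n2
        eta = 0.0
        if mu1 > 0:
            eta -= m1 * np.log(mu1)
        if mu2 > 0:
            eta -= m2 * np.log(mu2)
        if eta < best_eta:
            best_eta, best_t = eta, t
    return best_t

# Synthetic bimodal "image": dark object on a bright background
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
img = np.clip(img, 0, 255)
t = mcet_threshold(img)
print("threshold:", t)
```

For two well-separated modes the minimum cross-entropy threshold lands between them, which is the behavior the distribution-fitting step of the PPSM refines.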
Findings
On the basis of the achieved results, it can be observed that the proposed PPSM–minimum cross-entropy thresholding (PPSM–MCET)-based segmentation model is a robust, accurate and highly consistent method with high-performance ability.
Originality/value
A novel hybrid segmentation model is constructed exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, and to provide an accurate and high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort in MCET computing. The proposed model might be used as a valuable tool in many oriented applications such as health-care systems, pattern recognition, traffic control, surveillance systems, etc.
Abstract
Purpose
Reynolds-averaged Navier–Stokes (RANS) models often perform poorly in shock/turbulence interaction regions, resulting in excessive wall heat load and an incorrect representation of the separation length in shockwave/turbulent boundary layer interactions. The authors suggest that this can be traced back to inadequate numerical treatment of the inviscid fluxes. The purpose of this study is to extend the well-known Harten, Lax, van Leer, Einfeldt (HLLE) Riemann solver to overcome this issue.
Design/methodology/approach
The extended scheme explicitly takes into account the broadening of waves due to the averaging procedure, which adds numerical dissipation and reduces excessive turbulence production across shocks. The scheme is derived from the HLLE equations and is tested against three numerical experiments.
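For reference, the baseline HLLE flux that such a scheme builds on can be sketched for the 1D Euler equations as follows. The wave-speed estimates are simple Davis-type bounds, and the wave-broadening modification described above is deliberately not included:

```python
import numpy as np

GAMMA = 1.4

def euler_flux(U):
    """Physical flux of the 1D Euler equations, U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return np.array([mom, mom * u + p, (E + p) * u])

def hlle_flux(UL, UR):
    """Standard HLLE numerical flux with Davis-type wave-speed bounds."""
    def state(U):
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
        return u, np.sqrt(GAMMA * p / rho)   # velocity, sound speed
    uL, aL = state(UL)
    uR, aR = state(UR)
    SL = min(uL - aL, uR - aR)               # leftmost wave speed
    SR = max(uL + aL, uR + aR)               # rightmost wave speed
    FL, FR = euler_flux(UL), euler_flux(UR)
    if SL >= 0.0:
        return FL                            # supersonic to the right
    if SR <= 0.0:
        return FR                            # supersonic to the left
    return (SR * FL - SL * FR + SL * SR * (UR - UL)) / (SR - SL)

# Consistency check: for equal states the numerical flux must reduce
# to the physical flux.
U = np.array([1.0, 0.5, 2.5])
print(np.allclose(hlle_flux(U, U), euler_flux(U)))
```

The last term, proportional to the jump (UR − UL), is where the dissipation sits; broadening the wave-speed interval [SL, SR] is one natural way to add dissipation of the kind the paper motivates.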
Findings
Sod’s shock tube case shows that the scheme succeeds in reducing turbulence amplification across shocks. A shock-free turbulent flat plate boundary layer indicates that smooth flow at moderate turbulence intensity is largely unaffected by the scheme. A shock/turbulent boundary layer interaction case with higher turbulence intensity shows that the added numerical dissipation can, however, impair the wall heat flux distribution.
Originality/value
The proposed scheme is motivated by implicit large eddy simulations that use numerical dissipation as a subgrid-scale model. Introducing physical aspects of turbulence into the numerical treatment of RANS simulations is a novel approach.
Alexander Schugardt, Louis Kaiser, Fatih Avcilar and Uwe Schäfer
Abstract
Purpose
This paper aims to present an interactive design and simulation tool for permanent magnet synchronous machines based on the finite element method. The tool is intended for education and research on electrical machines.
Design/methodology/approach
A coupling between the software MATLAB and Finite Element Method Magnetics (FEMM) is used. Several functionalities are included as modular scripts and represented in the form of a graphical user interface. Included are fully parametrized motor models, automatic winding generation and the evaluation of torque waveforms, core losses and speed-torque diagrams. A survey was conducted to determine how students' motivation concerning the covered topics is influenced by using the tool.
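A speed-torque diagram of the kind the tool evaluates can be sketched from an idealized drive envelope: constant torque below base speed and constant power (field weakening) above it. The ratings below are invented for illustration and are not from the paper's motor models:

```python
import numpy as np

def speed_torque_curve(T_rated, n_base, n_max, points=100):
    """Idealized PMSM drive envelope: constant torque up to base speed,
    constant power (field-weakening region) above it."""
    n = np.linspace(1.0, n_max, points)              # speed in rpm
    T = np.where(n <= n_base, T_rated, T_rated * n_base / n)
    return n, T

# Illustrative ratings: 50 N*m up to 3000 rpm, field weakening to 9000 rpm
n, T = speed_torque_curve(T_rated=50.0, n_base=3000.0, n_max=9000.0)
P = T * n * 2.0 * np.pi / 60.0                       # mechanical power in W
print(f"max power = {P.max() / 1000:.1f} kW")
```

A finite-element tool replaces this idealized envelope with torque values computed from the actual field solution at each operating point, which is what makes the simulated diagrams useful for teaching.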
Findings
Due to its simplicity and the intuitive visualization of the results, the tool provides direct access to the topic of electrical machines without having to deal with separate scripts. The modular structure of the software allows simple extensions with new functions. Because students can directly contribute to the tool with their own work, their motivation for using and extending it increases.
Originality/value
The presented tool offers more functionalities compared to similar free software packages, e.g. the calculation of core losses and speed-torque diagrams. Also, it is designed in such a way that it can be easily understood and extended by students.