Search results
1 – 10 of 198

Stefanía Carolina Posadas, Silvia Ruiz-Blanco, Belen Fernandez-Feijoo and Lara Tarquinio
Abstract
Purpose
This paper aims to analyse the impact of the European Union (EU) Directive on the quality of sustainability reporting through the lens of institutional theory. Specifically, the authors evaluate which kind of institutional pressure has the greatest impact on the quality of corporate disclosure on sustainability issues.
Design/methodology/approach
The authors build a quality index based on the content analysis of sustainability information disclosed, before and after the transposition of the Directive, by Italian and Spanish companies belonging to different industries. The authors use an OLS regression model to analyse the effect of coercive, normative and mimetic forces on the quality of the sustainability reports.
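The estimation step described here can be sketched in miniature: for a single regressor, the OLS coefficients have a closed form. The data and variable names below are invented for illustration, not taken from the paper.

```python
# Minimal ordinary least squares (OLS) fit for a single regressor,
# illustrating the kind of model estimated (toy data, hypothetical names).
def ols_fit(x, y):
    """Return (intercept, slope) minimising the sum of squared residuals."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Toy data: a hypothetical "mimetic pressure" proxy vs. a quality index.
mimetic = [1.0, 2.0, 3.0, 4.0, 5.0]
quality = [2.1, 3.9, 6.1, 8.0, 9.9]
b0, b1 = ols_fit(mimetic, quality)
print(round(b0, 2), round(b1, 2))
```

In the paper itself the regression has several institutional-pressure regressors, so the fit is the multivariate analogue of this one-variable formula.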
Findings
The results highlight that normative and mimetic mechanisms positively affect the quality of sustainability reporting, whereas there is no evidence regarding coercive mechanisms, indicating that the new requirements do not provide a significant contribution to the development of better reporting practices, at least in the two analysed countries.
Originality/value
To the best of the authors’ knowledge, this is one of the few studies assessing the quality of sustainability reporting through an analysis involving the period before and after the implementation of the EU Directive. It enriches the literature on institutional theory by analysing how the different dimensions of isomorphism affect the quality of information disclosed by companies according to the EU requirements. It contributes to a better understanding of the impact of the non-financial information Directive, and the results of this paper can be relevant for regulators, practitioners and academia, especially in view of the adoption of the new Corporate Sustainability Reporting Directive proposal.
Abstract
Classification techniques have been applied to many applications in various fields of science. There are several ways of evaluating classification algorithms, and the resulting metrics and their significance must be interpreted correctly when comparing learning algorithms. Most of these measures are scalar metrics, while some are graphical methods. This paper presents a detailed overview of classification assessment measures, with the aim of providing the basics of these measures and showing how they work, so as to serve as a comprehensive source for researchers interested in this field. The overview starts by defining the confusion matrix in binary and multi-class classification problems. Many classification measures are then explained in detail, and the influence of balanced and imbalanced data on each metric is presented. An illustrative example shows (1) how to calculate these measures in binary and multi-class classification problems and (2) the robustness of some measures against balanced and imbalanced data. Moreover, graphical measures such as receiver operating characteristic (ROC), precision-recall (PR) and detection error trade-off (DET) curves are presented in detail. Additionally, different numerical examples demonstrate, step by step, the preprocessing required for plotting ROC, PR and DET curves.
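The scalar measures surveyed here all derive from the four cells of the binary confusion matrix. A minimal sketch, with invented counts for an imbalanced example:

```python
# Scalar metrics derived from a binary confusion matrix
# (tp/fp/fn/tn counts below are invented for illustration).
def metrics(tp, fp, fn, tn):
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)          # a.k.a. sensitivity, TPR
    f1        = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Imbalanced toy example: 90 true negatives-side cases, 10 positives.
acc, prec, rec, f1 = metrics(tp=8, fp=5, fn=2, tn=85)
print(round(acc, 2), round(prec, 2), round(rec, 2), round(f1, 2))
```

Note how accuracy (0.93) looks flattering on this imbalanced data while precision and F1 are much lower, which is exactly the sensitivity-to-imbalance the paper discusses.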
Jorge Manuel Mercado-Colmenero, M. Dolores La Rubia, Elena Mata-García, Moisés Rodriguez-Santiago and Cristina Martin-Doñate
Abstract
Purpose
Because of the anisotropy of the process and the variability in the quality of printed parts, finite element analysis is not directly applicable to recycled materials manufactured using fused filament fabrication. The purpose of this study is to investigate the numerical-experimental mechanical behavior modeling of the recycled polymer, that is, recyclable polyethylene terephthalate (rPET), manufactured by a deposition FFF process under compressive stresses for new sustainable designs.
Design/methodology/approach
In all, 42 test specimens were manufactured and analyzed according to the ASTM D695-15 standard. Eight numerical analyses were performed on a real design manufactured with rPET, using Young's compression modulus from the experimental tests. Finally, eight additional experimental tests under uniaxial compression loads were performed on the real sustainable design to validate its mechanical behavior against the computational numerical tests.
Findings
The experimental tests show that rPET behaves linearly along each manufacturing axis until it reaches the elastic limit. The results confirmed the design's structural safety under the load scenario and operating boundary conditions. Experimental and numerical results differ by only 0.001–0.024 mm, allowing rPET to be configured as isotropic in numerical simulation software without having to modify its material modeling equations.
Practical implications
The results obtained are of great help to industry, designers and researchers because they validate the use of recycled rPET for the ecological production of real, sustainable products using MEX technology under compressive stress, and its configuration for numerical simulations. Major design companies are now using recycled plastic materials in their high-end designs.
Originality/value
Validation results have been presented on test specimens and real items, comparing experimental material configuration values with numerical results. Specifically, to the best of the authors’ knowledge, no industrial or scientific work has been conducted with rPET subjected to uniaxial compression loads for characterizing experimentally and numerically the material using these results for validating a real case of a sustainable industrial product.
Enrique Sanmiguel-Rojas and Ramon Fernandez-Feria
Abstract
Purpose
This paper aims to analyze the propulsive performance of small-amplitude pitching foils at very high frequencies, with two objectives: to find scaling laws for the time-averaged thrust and propulsive efficiency at very high frequencies; and to characterize the Strouhal number above which the effect of turbulence on the mean values cannot be neglected.
Design/methodology/approach
The thrust force and propulsive efficiency of a pitching NACA0012 foil at high reduced frequencies (k) and a Reynolds number Re = 16 000 are analyzed using accurate numerical simulations, both assuming laminar flow and using a transition turbulence model. The time-averaged results are validated with available experimental data for k up to about 12 (Strouhal number, St, up to 0.6). This study also compares the present numerical results with the predictions of theoretical models and existing numerical results. For a foil pitching about its quarter chord with amplitude α0 = 8o, the reduced frequency is varied here up to k = 30 (St up to 2), much higher than in any previous numerical or experimental work.
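The two dimensionless frequencies quoted here follow the standard kinematic definitions for pitching foils; the sketch below uses the common conventions, which may differ slightly from the paper's exact normalisation.

```python
# Standard kinematic definitions for pitching-foil studies (a sketch;
# normalisation conventions vary between papers).
import math

def reduced_frequency(f, c, U):
    """k = pi * f * c / U  (f: pitch frequency, c: chord, U: free-stream speed)."""
    return math.pi * f * c / U

def strouhal(f, A, U):
    """St = f * A / U, with A the peak-to-peak trailing-edge excursion."""
    return f * A / U

k = reduced_frequency(f=2.0, c=0.5, U=math.pi)   # = 1.0
st = strouhal(f=2.0, A=0.25, U=1.0)              # = 0.5
print(k, st)
```

Because the trailing-edge excursion A grows with the pitch amplitude, k and St are proportional at fixed amplitude, which is why the paper can quote both (k up to 30, St up to 2) for α0 = 8°.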
Findings
For this pitch amplitude, turbulence effects are found to be negligible for St ≲ 0.8 and to affect the time-averaged thrust coefficient by less than 10%.
Originality/value
Pitching foils are increasingly studied as efficient propellers and energy harvesting devices. Their performance at very high reduced frequencies has not been sufficiently analyzed before. The authors provide accurate numerical simulations to discern when turbulence is relevant for the computation of the time-averaged thrust and efficiency and how their scaling with the reduced frequency is affected in relation to the laminar-flow predictions. This is relevant because some small-amplitude theoretical models predict high propulsive efficiency of pitching foils at very high frequencies over certain ranges of the structural parameters, and only very accurate numerical simulations may decide on these predictions.
Yong Li, Yingchun Zhang, Gongnan Xie and Bengt Ake Sunden
Abstract
Purpose
This paper aims to comprehensively clarify the research status of thermal transport of supercritical aviation kerosene, with particular interests in the effect of cracking on heat transfer.
Design/methodology/approach
A brief review of current research on supercritical aviation kerosene is presented, covering surrogate models of hydrocarbon fuels, the chemical cracking mechanism of hydrocarbon fuels, the thermo-physical properties of hydrocarbon fuels, turbulence models, flow characteristics and thermal performance; the review indicates that more effort needs to be directed to these topics. Supercritical thermal transport of n-decane is then computationally investigated under thermal pyrolysis conditions, with ASPEN HYSYS providing the properties of n-decane and its pyrolysis products. In addition, the one-step chemical cracking mechanism and the SST k-ω turbulence model are applied with relatively high precision.
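One-step global cracking mechanisms of the kind used here are conventionally expressed with an Arrhenius rate law; the pre-exponential factor and activation energy below are placeholders for illustration, not the values used in the study.

```python
# Arrhenius rate law underlying a one-step global cracking mechanism
# (parameters are placeholders, not the paper's values).
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate(A, Ea, T):
    """k(T) = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

# Placeholder kinetics: the rate rises steeply with wall temperature,
# which is what couples pyrolysis to the near-wall heat transfer.
A_pre, Ea = 1.0e15, 2.6e5   # 1/s, J/mol
print(arrhenius_rate(A_pre, Ea, 900.0) < arrhenius_rate(A_pre, Ea, 950.0))  # True
```

This strong temperature sensitivity is why the endothermic cracking concentrates where the fluid is hottest and can locally moderate the wall temperature.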
Findings
The existing surrogate models of aviation kerosene are limited to specific scopes of application, and their thermo-physical properties deviate from the experimental data. The turbulence models used in numerical simulations should be studied further to improve prediction accuracy. The thermally induced acceleration is driven by the drastic density change caused by the production of small molecules. This behavior can effectively reduce the wall temperature of the combustion chamber, i.e. the phenomenon of heat transfer deterioration can be attenuated or suppressed by thermal pyrolysis.
Originality/value
The issues in numerical studies of supercritical aviation kerosene are clearly revealed, and the coupling mechanism between thermal pyrolysis and convective heat transfer is presented for the first time.
Thomas Salzberger and Monika Koller
Abstract
Purpose
Psychometric analyses of self-administered questionnaire data tend to focus on items and instruments as a whole. The purpose of this paper is to investigate the functioning of the response scale and its impact on measurement precision. In terms of the response scale direction, existing evidence is mixed and inconclusive.
Design/methodology/approach
Three experiments are conducted to examine the functioning of response scales of different direction, ranging from agree to disagree versus from disagree to agree. The response scale direction effect is exemplified by two different latent constructs by applying the Rasch model for measurement.
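The dichotomous Rasch model applied here gives the probability of endorsing an item as a logistic function of the person location minus the item location. A minimal sketch of the measurement model (not the authors' estimation code):

```python
# Dichotomous Rasch model: P(X=1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)),
# where theta is the person location and b the item location on the latent trait.
import math

def rasch_p(theta, b):
    """Probability of agreement (scoring 1) under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When the person matches the item location, P = 0.5 by construction.
print(rasch_p(0.0, 0.0))            # 0.5
print(round(rasch_p(1.0, 0.0), 3))  # 0.731
```

Because theta and b live on a shared logit scale, a response-scale direction effect that changes the spread of item locations shows up as a change in the unit of measurement, which is the quantity the paper examines.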
Findings
The agree-to-disagree format generally performs better than the disagree-to-agree variant with spatial proximity between the statement and the agree-pole of the scale appearing to drive the effect. The difference is essentially related to the unit of measurement.
Research limitations/implications
A careful investigation of the functioning of the response scale should be part of every psychometric assessment. The framework of Rasch measurement theory offers unique opportunities in this regard.
Practical implications
Besides content, validity and reliability, academics and practitioners utilising published measurement instruments are advised to consider any evidence on the response scale functioning that is available.
Originality/value
The study exemplifies the application of the Rasch model to assess measurement precision as a function of the design of the response scale. The methodology raises the awareness for the unit of measurement, which typically remains hidden.
Matteo Davide Lorenzo Dalla Vedova and Pier Carlo Berri
Abstract
Purpose
The purpose of this paper is to propose a new simplified numerical model, based on a very compact semi-empirical formulation, able to simulate the fluid dynamics behaviors of an electrohydraulic servovalve taking into account several effects due to valve geometry (e.g. flow leakage between spool and sleeve) and operating conditions (e.g. variable supply pressure or water hammer).
Design/methodology/approach
The proposed model simulates valve performance through a simplified representation, derived from the linearized approach based on pressure and flow gains, but able to evaluate the mutual interaction between boundary conditions, pressure saturation and leakage. Its performance was evaluated by comparison with other fluid dynamic numerical models (a detailed physics-based high-fidelity model and other simplified models available in the literature).
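The classical linearized description that the proposed model extends expresses delivered flow as a flow gain times spool travel minus a flow-pressure coefficient times load pressure. A sketch with illustrative placeholder coefficients, not the paper's values:

```python
# Classical linearised servovalve flow equation around an operating point
# (coefficients Kq, Kc are illustrative placeholders).
def valve_flow(x_spool, dp_load, Kq=2.0e-2, Kc=5.0e-12):
    """Q = Kq * x - Kc * dP  (Kq: flow gain, Kc: flow-pressure coefficient)."""
    return Kq * x_spool - Kc * dp_load

# Delivered flow drops as load pressure rises at fixed spool opening.
q_unloaded = valve_flow(1.0e-3, 0.0)
q_loaded   = valve_flow(1.0e-3, 1.0e7)
print(q_unloaded > q_loaded)  # True
```

The proposed model replaces this linear Q-dP dependence with a nonlinear one, which is what lets it capture pressure saturation and water hammer.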
Findings
Although it still shows some limitations attributable to its simplified formulation, the proposed model overcomes several deficiencies typical of the most common fluid dynamic models available in the literature, describing the water hammer effect and the nonlinear dependence of the delivery differential pressure on the spool displacement.
Originality/value
Although still based on a simplified formulation with reduced computational costs, the proposed model introduces a new nonlinear approach that, approximating with suitable precision the pressure-flow fluid dynamic characteristic of a servovalve, overcomes the shortcomings typical of such models.
Lufei Huang, Liwen Murong and Wencheng Wang
Abstract
Purpose
Environmental issues have become an important concern in modern supply chain management. The structure of closed-loop supply chain (CLSC) networks, which considers both forward and reverse logistics, can greatly improve the utilization of materials and enhance the performance of the supply chain in coping with environmental impacts and cost control.
Design/methodology/approach
A biobjective mixed-integer programming model is developed to achieve the balance between environmental impact control and operational cost reduction. Various factors regarding the capacity level and the environmental level of facilities are incorporated in this study. The scenario-based method and the Epsilon method are employed to solve the stochastic programming model under uncertain demand.
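The epsilon-constraint idea behind the Epsilon method can be illustrated on a toy discrete design problem: minimise cost subject to an emissions bound, then sweep the bound to trace a Pareto front. All data below are invented for illustration.

```python
# Epsilon-constraint sketch on a toy biobjective problem:
# candidate network designs as (cost, CO2 emissions) pairs.
designs = [(100, 90), (120, 60), (150, 40), (200, 25), (260, 24)]

def epsilon_constraint(designs, eps):
    """Cheapest design whose emissions do not exceed eps (None if infeasible)."""
    feasible = [d for d in designs if d[1] <= eps]
    return min(feasible, key=lambda d: d[0]) if feasible else None

# Sweep the emissions bound to trace out the Pareto front.
front = []
for eps in (90, 60, 40, 25):
    sol = epsilon_constraint(designs, eps)
    if sol is not None and sol not in front:
        front.append(sol)
print(front)
```

In the paper the inner minimisation is a stochastic MIP rather than a table lookup, but the front-tracing logic is the same.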
Findings
The proposed stochastic mixed-integer programming (MIP) model is an effective way of formulating and solving the CLSC network design problem. The reliability and precision of the Epsilon method are verified through the numerical experiments. Calculating conversion efficiency can achieve the trade-off between cost control and CO2 emissions. Managers should pay particular attention to facility-operation activities, as these nodes may be the main drivers of costs and environmental impacts in the CLSC network. Both costs and CO2 emissions are influenced by the return rate, costs especially; because CO2 emissions are barely affected by the return rate, managers should be judicious when relying on the return rate for cost control. It is advisable to convert the two objectives into a single one through the idea of "Efficiency of CO2 Emissions Control Reduction," which gives managers a practical way to perform this double-to-single-target conversion.
Originality/value
The authors propose a biobjective optimization model of the CLSC network that considers both environmental impact control and operational cost reduction. The scenario-based method and the Epsilon method are employed to solve the mixed-integer programming model under uncertain demand.
Daniel Šandor and Marina Bagić Babac
Abstract
Purpose
Sarcasm is a linguistic expression that usually carries the opposite meaning of what is being said by words, thus making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using machine and deep learning approaches.
Design/methodology/approach
For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
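The machine-learning baseline described above can be sketched in miniature as bag-of-words features plus logistic regression trained by gradient descent. The data are toy examples; the actual study used 1.3 million comments, richer preprocessing and far larger models.

```python
# Toy sketch of a logistic-regression sarcasm classifier over
# bag-of-words features (invented data, not the study's corpus).
import math

train = [
    ("oh great another monday", 1),            # 1 = sarcastic
    ("yeah right that will totally work", 1),
    ("wow what a genius move", 1),
    ("the meeting starts at noon", 0),
    ("this recipe turned out well", 0),
    ("thanks for the helpful answer", 0),
]

vocab = sorted({w for text, _ in train for w in text.split()})
idx = {w: i for i, w in enumerate(vocab)}

def featurise(text):
    v = [0.0] * len(vocab)
    for w in text.split():
        if w in idx:
            v[idx[w]] += 1.0
    return v

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Batch gradient descent on the logistic loss.
w = [0.0] * len(vocab)
b = 0.0
lr = 0.5
for _ in range(200):
    gw = [0.0] * len(vocab)
    gb = 0.0
    for text, y in train:
        x = featurise(text)
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
        gb += err
        for i, xi in enumerate(x):
            gw[i] += err * xi
    b -= lr * gb / len(train)
    w = [wi - lr * gi / len(train) for wi, gi in zip(w, gw)]

def predict(text):
    x = featurise(text)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

print(predict("oh great another monday"))  # fits the training data
```

The deep-learning models in the paper replace the bag-of-words features with learned contextual representations (BiLSTM, BERT), which is what lets them pick up the context dependence that sarcasm exploits.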
Findings
The performance of machine and deep learning models was compared in the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art natural language processing model, the BERT-based model, outperformed the other machine and deep learning models.
Originality/value
This study compared the performance of the various machine and deep learning models in the task of sarcasm detection using the data set of 1.3 million comments from social media.
Radek Doubrava, Martin Oberthor, Petr Bělský and Bohuslav Cabrnoch
Abstract
Purpose
The purpose of this paper is to describe the approach for the design of cowlings for a new fast helicopter from the perspective of airworthiness requirements regarding high-speed impact resistance.
Design/methodology/approach
Validated numerical simulation was applied to flat and simple curved test panels. High-speed camera measurement and non-destructive testing (NDT) results were used for verification of the numerical models. The final design was optimized and verified by validated numerical simulation.
Findings
The comparison of numerical simulation based on static material properties with experimental results under high-speed loading shows no significant strain-rate effect in the composite material.
Research limitations/implications
Owing to the sensitivity of the composite material to the production technology, the results are limited to the material used and the production technology employed.
Practical implications
The application of flat and simple curved test panels for the verification and calibration of numerical models allows the optimized final design of the cowling and reduces the risk of structural non-compliance during verification tests.
Originality/value
Numerical models were verified for simulation of the real composite structure based on high-speed camera results and NDT inspection after impact. The proposed numerical model was simplified for application to a complex design, reducing calculation time.