Search results

1 – 10 of over 2000
Open Access
Article
Publication date: 31 January 2024

Manuel Castelo Castelo Branco, Delfina Gomes and Adelaide Martins

The purpose of this study is to contribute to the discussion surrounding the definition of accounting proposed by Carnegie et al. (2021a, 2021b) and further elaborated by Carnegie…

Abstract

Purpose

The purpose of this study is to contribute to the discussion surrounding the definition of accounting proposed by Carnegie et al. (2021a, 2021b) and further elaborated by Carnegie et al. (2023) from an institutionalist political-economy (IPE) based foundation, and specifically to extend this approach to the arena of social and environmental accounting (SEA).

Design/methodology/approach

By adopting an IPE approach to SEA, this study offers a critique of the use of the notion of capital to refer to nature and people in SEA frameworks and standards.

Findings

A SEA framework grounded in the capabilities approach is proposed, drawing on the concepts of human capabilities and global commons, for the purpose of preserving the commons and enabling the flourishing of present and future generations.

Practical implications

The proposed framework allows the accounting community, in particular SEA researchers, to engage with and contribute to such well-established initiatives as the Planetary Boundaries framework and the human development reports initiative of the United Nations Development Programme.

Originality/value

Based on the capabilities approach, this study applies Carnegie et al.’s (2023) framework to SEA. This new approach, more attuned to the pursuit of sustainable human development and the sustainable development goals, may contribute to turning accounting into a major positive force through its impacts on the world, expressly upon organisations, people and nature.

Details

Meditari Accountancy Research, vol. 32 no. 7
Type: Research Article
ISSN: 2049-372X


Article
Publication date: 28 March 2024

Elisa Gonzalez Santacruz, David Romero, Julieta Noguez and Thorsten Wuest

This research paper aims to analyze the scientific and grey literature on Quality 4.0 and zero-defect manufacturing (ZDM) frameworks to develop an integrated quality 4.0 framework…

Abstract

Purpose

This research paper aims to analyze the scientific and grey literature on Quality 4.0 and zero-defect manufacturing (ZDM) frameworks to develop an integrated Quality 4.0 framework (IQ4.0F) for quality improvement (QI) based on Six Sigma and machine learning (ML) techniques towards ZDM. The IQ4.0F aims to contribute to the advancement of defect prediction approaches in diverse manufacturing processes. Furthermore, the work enables a comprehensive analysis of the process variables influencing product quality, with emphasis on the use of supervised and unsupervised ML techniques in the “Analyze” stage of Six Sigma’s DMAIC (Define, Measure, Analyze, Improve and Control) cycle.

Design/methodology/approach

The research methodology employed a systematic literature review (SLR) based on PRISMA guidelines to develop the integrated framework, followed by a real industrial case study set in the automotive industry to fulfill the objectives of verifying and validating the proposed IQ4.0F with primary data.

Findings

This research work demonstrates the value of a “stepwise framework” to facilitate a shift from conventional quality management systems (QMSs) to QMSs 4.0. It uses the IDEF0 modeling methodology and Six Sigma’s DMAIC cycle to structure the steps to be followed to adopt the Quality 4.0 paradigm for QI. It also proves the worth of integrating Six Sigma and ML techniques into the “Analyze” stage of the DMAIC cycle for improving defect prediction in manufacturing processes and supporting problem-solving activities for quality managers.

Originality/value

This research paper introduces a first-of-its-kind Quality 4.0 framework – the IQ4.0F. Each step of the IQ4.0F was verified and validated in an original industrial case study set in the automotive industry. It is the first Quality 4.0 framework, according to the SLR conducted, to utilize the principal component analysis technique as a substitute for “Screening Design” in the Design of Experiments phase and K-means clustering technique for multivariable analysis, identifying process parameters that significantly impact product quality. The proposed IQ4.0F not only empowers decision-makers with the knowledge to launch a Quality 4.0 initiative but also provides quality managers with a systematic problem-solving methodology for quality improvement.
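As a rough illustration of the PCA-plus-clustering step this abstract describes, the sketch below screens synthetic process data with PCA and groups production runs with a small k-means loop. It is not the authors' implementation of the IQ4.0F; the data, parameter count and cluster count are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic process data: 200 production runs, 6 process parameters
X = rng.normal(size=(200, 6))
X[:, 0] += 2.0 * X[:, 1]  # introduce correlated parameters

# Standardize, then PCA via SVD: components with large explained variance
# flag the parameter combinations that dominate process variation,
# playing the screening role the framework assigns to PCA.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)
scores = Xs @ Vt[:2].T  # project runs onto the first 2 principal components

# Tiny k-means (k=3) on the PC scores to group runs into operating
# regimes, which could then be compared against defect rates.
centers = scores[rng.choice(len(scores), 3, replace=False)]
for _ in range(50):
    d = np.linalg.norm(scores[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    for k in range(3):
        if np.any(labels == k):  # guard against an empty cluster
            centers[k] = scores[labels == k].mean(axis=0)

print("explained variance (first 2 PCs): %.2f" % var_ratio[:2].sum())
print("runs per regime:", np.bincount(labels, minlength=3))
```

In practice the cluster-vs-defect comparison, not the clustering itself, is what supports the problem-solving step in the "Analyze" stage.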

Details

The TQM Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1754-2731


Open Access
Article
Publication date: 13 March 2024

Lina Gharaibeh, Kristina Eriksson and Björn Lantz

Perceived benefits of building information modelling (BIM) have been discussed for some time, but cost–benefit benchmarking has been inconsistent. The purpose of this paper is to…

Abstract

Purpose

Perceived benefits of building information modelling (BIM) have been discussed for some time, but cost–benefit benchmarking has been inconsistent. The purpose of this paper is to investigate BIM feasibility and evaluate investment worth to elucidate and develop the current understanding of BIM merit. The aim of the study is to propose a research agenda towards a more holistic perspective of BIM use, incorporating the quantification of investment return.

Design/methodology/approach

An in-depth examination of research patterns was conducted to identify challenges in assessing the investment value and return on investment (ROI) of BIM in the construction industry. A total of 75 research articles were included in the final literature review, which was evaluated using a combination of bibliometric analysis and systematic review.

Findings

This study, which analysed 75 articles, unveils key findings in quantifying BIM benefits, primarily through ROI calculation. Two major research gaps are identified: the absence of a standardized BIM ROI method and insufficient exploration of intangible benefits. Research focus varies across phases, emphasizing design and construction integration and exploring post-construction phases. The study categorizes quantifiable factors, including productivity, changes and rework reduction, requests for information reduction, schedule efficiency, safety, environmental sustainability and operations and facility management. These findings offer vital insights for researchers and practitioners, enhancing understanding of BIM’s financial benefits and signalling areas for further exploration in construction.

Originality/value

The study’s outcomes offer the latest insights for researchers and practitioners to create effective approaches for quantifying BIM’s financial benefits. Additionally, the proposed research agenda aims to improve the current limited understanding of BIM feasibility and investment worth evaluation. Results of the study could assist practitioners in overcoming limitations associated with BIM investment and economic evaluations in the construction industry.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531


Article
Publication date: 29 September 2023

Suraj Goala and Prabir Sarkar

One of the critical reasons for the nonacceptance of additive manufacturing (AM) processes is the lack of understanding and structured knowledge of design for additive…

Abstract

Purpose

One of the critical reasons for the nonacceptance of additive manufacturing (AM) processes is the lack of understanding and structured knowledge of design for additive manufacturing (DfAM). This paper aims to assist designers in selecting the appropriate AM technology for product development or redesign. Given the suggestions provided by the design-assist tool, the outcome depends on the user’s ability to translate those suggestions into the design without compromising the design’s primary objective.

Design/methodology/approach

This research reports the development of a tool that evaluates the efficacy values for all seven major standard AM processes by considering design parameters, benchmark standards within the processes and their material efficacies. In this research, the tool provides analytical and visual approaches to suggestion and assistance. Seventeen design parameters and seven benchmarking standards are used to evaluate the proposed product and design quality value. The full factorial design approach has been used to evaluate the DfAM aspects, design quality and design complexity.

Findings

The outcome is evaluated by the product and design quality value, material suitability and material-product-design (MPD) value proposed in this work for a comparative assessment of the AM processes for a design. The higher the MPD value, the better the process. The visual aspect of the evaluation uses spider diagrams, which are evaluated analytically to confirm the appropriateness of the results under the proposed methodology.

Originality/value

The data used in the database are assumed to make the study comprehensive. The output aims to help users opt for the best process out of the seven AM techniques for better and optimized manufacturing. To the best of the authors’ knowledge, such a tool is not yet available.

Article
Publication date: 28 September 2023

Álvaro Rodríguez-Sanz and Luis Rubio-Andrada

An important and challenging question for air transportation regulators and airport operators is the definition and specification of airport capacity. Annual capacity is used for…

Abstract

Purpose

An important and challenging question for air transportation regulators and airport operators is the definition and specification of airport capacity. Annual capacity is used for long-term planning purposes as a measure of available service volume, but it poses several inefficiencies when measuring the true throughput of the system because of seasonal and daily variations of traffic. Instead, airport throughput is calculated or estimated for a short period of time, usually one hour. This brings about a mismatch: air traffic forecasts typically yield annual volumes, whereas capacity is measured in hourly figures. To manage the right balance between airport capacity and demand, annual traffic volumes must be converted into design hour volumes, so that they can be compared with the true throughput of the system. This comparison is a cornerstone in planning new airport infrastructures, as design-period parameters are important for airport planners in anticipating where and when congestion occurs. Although the design hour for airport traffic has historically had a number of definitions, it is necessary to improve the way air traffic design hours are selected. This study aims to provide an empirical analysis of airport capacity and demand, specifically focusing on insights related to air traffic design hours and the relationship between capacity and delay.

Design/methodology/approach

By reviewing the empirical relationships between hourly and annual air traffic volumes and between practical capacity and delay at 50 European airports during the period 2004–2021, this paper discusses the problem of defining a suitable peak hour for capacity evaluation purposes. The authors use information from several data sources, including EUROCONTROL, ACI and OAG. This study provides functional links between design hours and annual volumes for different airport clusters. Additionally, the authors appraise different daily traffic distribution patterns and their variation by hour of the day.

Findings

The clustering of airports with respect to their capacity, operational and traffic characteristics allows us to discover functional relationships between annual traffic and the percentage of traffic in the design hour. These relationships help the authors to propose empirical methods to derive expected traffic in design hours from annual volumes. The main conclusion is that the percentage of total annual traffic that is concentrated at the design hour maintains a predictable behavior through a “potential” adjustment with respect to the volume of annual traffic. Moreover, the authors provide an experimental link between capacity and delay so that peak hour figures can be related to factors that describe the quality of traffic operations.
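The "potential" (power-law) adjustment described in these findings can be illustrated by fitting a relation of the form peak-hour share = a · (annual volume)^b in log-log space. The sketch below runs on synthetic airport data; the coefficient values are invented for the example and are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic sample of 50 airports: annual traffic volumes and the
# percentage of annual traffic concentrated in the design hour.
annual = rng.uniform(1e5, 8e7, size=50)
true_a, true_b = 8.0, -0.2  # hypothetical power-law parameters
share = true_a * annual**true_b * rng.lognormal(0.0, 0.05, size=50)

# Fit share = a * annual**b by ordinary least squares in log-log space:
# log(share) = log(a) + b * log(annual).
b, log_a = np.polyfit(np.log(annual), np.log(share), 1)
a = np.exp(log_a)
print(f"fitted a = {a:.2f}, b = {b:.3f}")
```

A negative exponent b reproduces the familiar pattern that larger airports concentrate a smaller share of their annual traffic in the design hour.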

Originality/value

The functional relationships between hourly and annual air traffic volumes and between capacity and delay, can be used to properly assess airport expansion projects or to optimize resource allocation tasks. This study offers new evidence on the nature of airport capacity and the dynamics of air traffic design hours and delay.

Details

Aircraft Engineering and Aerospace Technology, vol. 96 no. 1
Type: Research Article
ISSN: 1748-8842


Article
Publication date: 28 March 2024

Anna Young-Ferris, Arunima Malik, Victoria Calderbank and Jubin Jacob-John

Avoided emissions refer to greenhouse gas emission reductions that are a result of using a product or are emission removals due to a decision or an action. Although there is no…

Abstract

Purpose

Avoided emissions refer to greenhouse gas emission reductions that are a result of using a product or are emission removals due to a decision or an action. Although there is no uniform standard for calculating avoided emissions, market actors have started referring to avoided emissions as “Scope 4” emissions. By default, making a claim about Scope 4 emissions gives an appearance that this Scope of emissions is a natural extension of the existing and accepted Scope-based emissions accounting framework. The purpose of this study is to explore the implications of this assumed legitimacy.

Design/methodology/approach

Via a desktop review and interviews, we analyse extant Scope 4 company reporting, associated accounting methodologies and the practical implications of Scope 4 claims.

Findings

Upon examination of Scope 4 emissions and their relationship with Scopes 1, 2 and 3 emissions, we highlight a dynamic and interdependent relationship between quantification, commensuration and standardization in emissions accounting. We find that extant Scope 4 assessments do not fit the established framework for Scope-based emissions accounting. In line with literature on the territorializing nature of accounting, we call for caution about Scope 4 claims that are a distraction from the critical work of reducing absolute emissions.

Originality/value

We examine the implications of the assumed alignment and borrowed legitimacy of Scope 4 with Scope-based accounting because Scope 4 is not an actual Scope, but a claim to a Scope. This is an act of accounting territorialization.

Details

Accounting, Auditing & Accountability Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0951-3574


Open Access
Article
Publication date: 21 March 2024

Warisa Thangjai and Sa-Aat Niwitpong

Confidence intervals play a crucial role in economics and finance, providing a credible range of values for an unknown parameter along with a corresponding level of certainty…

Abstract

Purpose

Confidence intervals play a crucial role in economics and finance, providing a credible range of values for an unknown parameter along with a corresponding level of certainty. Their applications encompass economic forecasting, market research, financial forecasting, econometric analysis, policy analysis, financial reporting, investment decision-making, credit risk assessment and consumer confidence surveys. Signal-to-noise ratio (SNR) finds applications in economics and finance across various domains such as economic forecasting, financial modeling, market analysis and risk assessment. A high SNR indicates a robust and dependable signal, simplifying the process of making well-informed decisions. On the other hand, a low SNR indicates a weak signal that could be obscured by noise, so decision-making procedures need to take this into serious consideration. This research focuses on the development of confidence intervals for functions derived from the SNR and explores their application in the fields of economics and finance.

Design/methodology/approach

The construction of the confidence intervals involved the application of various methodologies. For the SNR, confidence intervals were formed using the generalized confidence interval (GCI), large sample and Bayesian approaches. The difference between SNRs was estimated through the GCI, large sample, method of variance estimates recovery (MOVER), parametric bootstrap and Bayesian approaches. Additionally, confidence intervals for the common SNR were constructed using the GCI, adjusted MOVER, computational and Bayesian approaches. The performance of these confidence intervals was assessed using coverage probability and average length, evaluated through Monte Carlo simulation.
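The coverage-probability and average-length evaluation described above can be sketched for the simplest case: a large-sample confidence interval for the SNR θ = μ/σ of a normal sample, assessed by Monte Carlo simulation. This is a generic textbook construction for illustration, not the paper's GCI, MOVER or Bayesian procedures, and the parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, z = 5.0, 2.0, 50, 1.96
theta = mu / sigma  # true SNR

cover, lengths = 0, []
for _ in range(5000):
    x = rng.normal(mu, sigma, size=n)
    est = x.mean() / x.std(ddof=1)
    # Large-sample standard error of the SNR estimator: Var ~ (1 + theta^2/2)/n
    se = np.sqrt((1 + est**2 / 2) / n)
    lo, hi = est - z * se, est + z * se
    cover += lo <= theta <= hi  # does the interval cover the true SNR?
    lengths.append(hi - lo)

print(f"coverage ~ {cover / 5000:.3f}, average length ~ {np.mean(lengths):.3f}")
```

Coverage probability close to the nominal 0.95, together with a short average length, is exactly the criterion the study uses to rank its competing interval constructions.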

Findings

The GCI approach demonstrated superior performance over other approaches in terms of both coverage probability and average length for the SNR and the difference between SNRs. Hence, employing the GCI approach is advised for constructing confidence intervals for these parameters. As for the common SNR, the Bayesian approach exhibited the shortest average length. Consequently, the Bayesian approach is recommended for constructing confidence intervals for the common SNR.

Originality/value

This research presents confidence intervals for functions of the SNR to assess SNR estimation in the fields of economics and finance.

Details

Asian Journal of Economics and Banking, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2615-9821


Article
Publication date: 16 February 2024

Neeraj Joshi, Sudeep R. Bapat and Raghu Nandan Sengupta

The purpose of this paper is to develop optimal estimation procedures for the stress-strength reliability (SSR) parameter R = P(X > Y) of an inverse Pareto distribution (IPD).

Abstract

Purpose

The purpose of this paper is to develop optimal estimation procedures for the stress-strength reliability (SSR) parameter R = P(X > Y) of an inverse Pareto distribution (IPD).

Design/methodology/approach

We estimate the SSR parameter R = P(X > Y) of the IPD under the minimum risk and bounded risk point estimation problems, where X and Y are strength and stress variables, respectively. The total loss function considered is a combination of estimation error (squared error) and cost, utilizing which we minimize the associated risk in order to estimate the reliability parameter. As no fixed-sample technique can be used to solve the proposed point estimation problems, we propose some “cost and time efficient” adaptive sampling techniques (two-stage and purely sequential sampling methods) to tackle them.
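For intuition, the SSR parameter R = P(X > Y) can be checked by simulation. The sketch below assumes one common parameterization of the inverse Pareto, with cdf F(x) = (βx/(1+βx))^α, under which R has the closed form α_X/(α_X + α_Y) when the scale parameters coincide. It is an illustrative fixed-sample computation, not the paper's two-stage or purely sequential procedure, and the shape values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def r_ipd(alpha, beta, size):
    """Sample inverse Pareto variates with cdf F(x) = (b*x/(1+b*x))**a
    (one common parameterization) by inverting the cdf."""
    u = rng.uniform(size=size) ** (1.0 / alpha)
    return u / (beta * (1.0 - u))

a_x, a_y, beta, n = 3.0, 1.5, 1.0, 200_000
x = r_ipd(a_x, beta, n)  # strength variable X
y = r_ipd(a_y, beta, n)  # stress variable Y

r_mc = np.mean(x > y)          # Monte Carlo estimate of P(X > Y)
r_closed = a_x / (a_x + a_y)   # closed form under a common scale
print(f"Monte Carlo R ~ {r_mc:.3f}, closed form R = {r_closed:.3f}")
```

A sequential procedure of the kind the paper proposes would instead decide the sample size adaptively from the loss function, rather than fixing n in advance as here.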

Findings

We state important results based on the proposed sampling methodologies. These include estimates of the expected sample size, standard deviation (SD) and mean square error (MSE) of the terminal estimator of the reliability parameter. The theoretical values of the reliability parameter and the associated sample size and risk functions are well supported by exhaustive simulation analyses. The applicability of our suggested methodology is further corroborated by a real dataset based on insurance claims.

Originality/value

This study will be useful for scenarios where various logistical concerns are involved in the reliability analysis. The methodologies proposed in this study can reduce the number of sampling operations substantially and save time and cost to a great extent.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 1 January 2024

Shrutika Sharma, Vishal Gupta, Deepa Mudgal and Vishal Srivastava

Three-dimensional (3D) printing is highly dependent on printing process parameters for achieving high mechanical strength. It is a time-consuming and expensive operation to…

Abstract

Purpose

Three-dimensional (3D) printing is highly dependent on printing process parameters for achieving high mechanical strength. It is a time-consuming and expensive operation to experiment with different printing settings. The current study aims to propose a regression-based machine learning model to predict the mechanical behavior of ulna bone plates.

Design/methodology/approach

The bone plates were formed using fused deposition modeling (FDM) technique, with printing attributes being varied. The machine learning models such as linear regression, AdaBoost regression, gradient boosting regression (GBR), random forest, decision trees and k-nearest neighbors were trained for predicting tensile strength and flexural strength. Model performance was assessed using root mean square error (RMSE), coefficient of determination (R2) and mean absolute error (MAE).

Findings

Traditional experimentation with various settings is both time-consuming and expensive, emphasizing the need for alternative approaches. Among the models tested, the GBR model demonstrated the best performance in predicting both tensile and flexural strength, achieving the lowest RMSE, highest R2 and lowest MAE (tensile: 1.4778 ± 0.4336 MPa, 0.9213 ± 0.0589 and 1.2555 ± 0.3799 MPa, respectively; flexural: 3.0337 ± 0.3725 MPa, 0.9269 ± 0.0293 and 2.3815 ± 0.2915 MPa, respectively). The findings open up opportunities for doctors and surgeons to use GBR as a reliable tool for fabricating patient-specific bone plates, without the need for extensive trial experiments.
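As a self-contained illustration of the gradient-boosting idea behind GBR (fitting successive regression stumps to the residuals under squared loss), the sketch below trains on synthetic stand-in data. It is not the authors' dataset or model, the feature names are invented, and the reported metrics are training-set values only.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic stand-in for printing parameters (e.g. layer height, infill,
# orientation) vs tensile strength in MPa -- not the authors' data.
X = rng.uniform(0, 1, size=(300, 3))
y = 30 + 10 * X[:, 0] - 5 * X[:, 1] ** 2 + rng.normal(0, 0.5, size=300)

def fit_stump(X, r):
    """Best single-split regression stump on the residuals r."""
    best = (np.inf, None)
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= t
            pred = np.where(left, r[left].mean(), r[~left].mean())
            sse = np.sum((r - pred) ** 2)
            if sse < best[0]:
                best = (sse, (j, t, r[left].mean(), r[~left].mean()))
    return best[1]

# Gradient boosting for squared loss: each stump fits the current residuals,
# and a small learning rate shrinks its contribution.
lr, pred = 0.1, np.full_like(y, y.mean())
for _ in range(200):
    j, t, lv, rv = fit_stump(X, y - pred)
    pred += lr * np.where(X[:, j] <= t, lv, rv)

rmse = np.sqrt(np.mean((y - pred) ** 2))
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"train RMSE = {rmse:.3f} MPa, R2 = {r2:.3f}")
```

A production-grade analysis along the paper's lines would use a tuned library implementation and hold-out or cross-validated metrics rather than training-set fit.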

Research limitations/implications

The current study is limited to the usage of a few models. Other machine learning-based models can be used for prediction-based study.

Originality/value

This study uses machine learning to predict the mechanical properties of FDM-based distal ulna bone plate, replacing traditional design of experiments methods with machine learning to streamline the production of orthopedic implants. It helps medical professionals, such as physicians and surgeons, make informed decisions when fabricating customized bone plates for their patients while reducing the need for time-consuming experimentation, thereby addressing a common limitation of 3D printing medical implants.

Details

Rapid Prototyping Journal, vol. 30 no. 3
Type: Research Article
ISSN: 1355-2546


Book part
Publication date: 5 April 2024

Emir Malikov, Shunan Zhao and Jingfang Zhang

There is growing empirical evidence that firm heterogeneity is technologically non-neutral. This chapter extends the Gandhi, Navarro, and Rivers (2020) proxy variable framework…

Abstract

There is growing empirical evidence that firm heterogeneity is technologically non-neutral. This chapter extends the Gandhi, Navarro, and Rivers (2020) proxy variable framework for structurally identifying production functions to a more general case when latent firm productivity is multi-dimensional, with both factor-neutral and (biased) factor-augmenting components. Unlike alternative methodologies, the proposed model can be identified under weaker data requirements, notably, without relying on the typically unavailable cross-sectional variation in input prices for instrumentation. When markets are perfectly competitive, point identification is achieved by leveraging the information contained in static optimality conditions, effectively adopting a system-of-equations approach. It is also shown how one can partially identify the non-neutral production technology in the traditional proxy variable framework when firms have market power.
