Search results

1 – 10 of 651
Article
Publication date: 17 March 2023

Stewart Jones

Abstract

Purpose

This study updates the literature review of Jones (1987) published in this journal. The study pays particular attention to two important themes that have shaped the field over the past 35 years: (1) the development of a range of innovative new statistical learning methods, particularly advanced machine learning methods such as stochastic gradient boosting, adaptive boosting, random forests and deep learning, and (2) the emergence of a wide variety of bankruptcy predictor variables extending beyond traditional financial ratios, including market-based variables, earnings management proxies, auditor going concern opinions (GCOs) and corporate governance attributes. Several directions for future research are discussed.
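Several of the boosting methods named above share one mechanism: iteratively re-weighting misclassified observations. As a rough illustration only (not the models evaluated in the review), the sketch below implements a minimal AdaBoost with decision stumps on a hypothetical two-ratio dataset; the ratio names and values are invented for the example.

```python
import math

# Hypothetical firms: ([working_capital/TA, retained_earnings/TA], label)
# label +1 = failed, -1 = healthy. All values are invented for illustration.
DATA = [
    ([-0.20, -0.10], +1), ([-0.05, -0.30], +1), ([-0.15, 0.05], +1),
    ([0.30, 0.25], -1), ([0.25, 0.10], -1), ([0.40, 0.35], -1),
]

def stump_predict(feat, thresh, sign, x):
    # A decision stump: one feature, one threshold.
    return sign if x[feat] <= thresh else -sign

def best_stump(data, w):
    # Exhaustively search feature/threshold/sign for minimum weighted error.
    best = None
    for feat in range(len(data[0][0])):
        for thresh in sorted(x[feat] for x, _ in data):
            for sign in (+1, -1):
                err = sum(wi for (x, y), wi in zip(data, w)
                          if stump_predict(feat, thresh, sign, x) != y)
                if best is None or err < best[0]:
                    best = (err, feat, thresh, sign)
    return best

def adaboost(data, rounds=5):
    n = len(data)
    w = [1.0 / n] * n          # uniform initial weights
    ensemble = []
    for _ in range(rounds):
        err, feat, thresh, sign = best_stump(data, w)
        err = max(err, 1e-10)  # guard against log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, feat, thresh, sign))
        # Re-weight: boost the misclassified firms for the next round.
        w = [wi * math.exp(-alpha * y * stump_predict(feat, thresh, sign, x))
             for (x, y), wi in zip(data, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(f, t, s, x) for a, f, t, s in ensemble)
    return +1 if score >= 0 else -1
```

Gradient boosting, random forests and deep learning differ in the base learner and loss, but the ensemble-of-weak-learners idea above is the common starting point.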

Design/methodology/approach

This study provides a systematic review of the corporate failure literature over the past 35 years, with a particular focus on the emergence of new statistical learning methodologies and predictor variables. This synthesis evaluates the strengths and limitations of different modelling approaches under different circumstances and provides an overall evaluation of the relative contribution of alternative predictor variables. The study aims to provide a transparent, reproducible and interpretable review of the literature. The review also takes a theme-centric rather than author-centric approach and focuses on structured themes that have dominated the literature since 1987.

Findings

There are several major findings. First, advanced machine learning methods appear to hold the most promise for future firm failure research. Not only do these methods predict significantly better than conventional models, but they also possess many appealing statistical properties. Second, a much wider range of variables is now being used to model and predict firm failure. However, the literature needs to be interpreted with some caution given the many mixed findings. Finally, a number of unresolved methodological issues arising from the Jones (1987) study still require research attention.

Originality/value

The study explains the connections and derivations between a wide range of firm failure models, from simpler linear models to advanced machine learning methods such as gradient boosting, random forests, adaptive boosting and deep learning. The paper highlights the most promising models for future research, particularly in terms of their predictive power, underlying statistical properties and issues of practical implementation. The study also draws together an extensive literature on alternative predictor variables and provides insights into the role and behaviour of alternative predictor variables in firm failure research.

Details

Journal of Accounting Literature, vol. 45 no. 2
Type: Research Article
ISSN: 0737-4607

Article
Publication date: 5 March 2024

Devender, Paras Ram and Kushal Sharma

Abstract

Purpose

The present article aims to investigate the squeeze effects on hematite suspension-based curved annular plates with Rosensweig’s viscosity and Kozeny–Carman’s porous structure under the variable strong magnetic field and slip in the Shliomis model. The variable magnetic field is utilised to retain all magnetic elements within the model. The aforementioned mechanism would have the benefit of generating a maximal field at the system’s required active contact zone.

Design/methodology/approach

The Kozeny–Carman globular sphere model is used for porous facing. Rosensweig’s extension of Einstein’s viscosity is taken into consideration to enhance the fluid’s viscosity, and Beavers and Joseph’s slip boundary conditions are employed to assess the slip effect.
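For context, the Kozeny–Carman model for a bed of spherical grains relates permeability to porosity. A commonly quoted form uses the sphere-packing constant 180; the exact form and constant used in the paper may differ, so treat this as an illustrative sketch only.

```python
def kozeny_carman_permeability(porosity, grain_diameter):
    """Permeability k [m^2] of a packed bed of spheres, Kozeny-Carman form:
    k = eps^3 * d^2 / (180 * (1 - eps)^2)."""
    eps = porosity
    return (eps ** 3) * grain_diameter ** 2 / (180.0 * (1.0 - eps) ** 2)

# Example: 100-micron grains at 40% porosity (illustrative numbers).
k = kozeny_carman_permeability(0.4, 1e-4)
```

Permeability grows rapidly with porosity, which is why the porous-facing model matters for the squeeze-film pressure.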

Findings

The pressure and lifting force under squeezing are computed through modification of the Reynolds equation with the addition of Kozeny–Carman’s model-based porosity, Rosensweig’s viscosity, slip and varying magnetic field. The obtained results for the lifting force are very encouraging and have been compared with Einstein’s viscosity-based model.

Originality/value

Researchers have so far studied the lubrication of various sliders considering Einstein’s viscosity only, whereas in our problem Rosensweig’s viscosity has been adopted along with Kozeny–Carman’s porous structure model.

Details

Multidiscipline Modeling in Materials and Structures, vol. 20 no. 2
Type: Research Article
ISSN: 1573-6105

Open Access
Article
Publication date: 10 October 2023

Francie Lange, Anna Peters, Dominik K. Kanbach and Sascha Kraus

Abstract

Purpose

This study aims to investigate different types of platform providers (PPs) to gain a deeper understanding of the characteristics and underlying logic of this group within collaborative consumption (CC). As CC occurs with three groups of actors (PP, peer service provider and customer) and is predominantly viewed from the customer perspective, this study offers insights from the under-researched PP perspective.

Design/methodology/approach

This research applies a multiple case study approach and descriptively and thematically analyzes 92 cases of CC PPs gathered through the Crunchbase database.

Findings

The authors derive four archetypes of CC PPs, namely, the hedonist, functionalist, environmentalist and connector, that differ in their offered values, dominating motives and activities across industries.

Research limitations/implications

The authors conceptualize CC by clearly describing the four archetypes and their characteristics. However, further research would benefit from including databases other than Crunchbase.

Practical implications

PPs need to understand their value offerings and customer preferences to develop convincing value propositions and offer engaging activities. PPs would benefit from a more active social media presence to build strong relations with customers and peer service providers to effectively communicate their values.

Originality/value

The paper is pioneering as it encompasses the perspective of CC PPs and operationalizes the concept of CC. The authors address the lack of research on CC by conducting an extensive case study.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 2 August 2023

Aurojyoti Prusty and Amirtham Rajagopal

Abstract

Purpose

This study implements the fourth-order phase field method (PFM) for modeling fracture in brittle materials. The weak form of the fourth-order PFM requires C1 basis functions for the crack evolution scalar field in a finite element framework. To address this, non-Sibsonian shape functions, which are nonpolynomial and based on distance measures, are used in the context of natural neighbor shape functions. The capability and efficiency of this method are studied for modeling cracks.

Design/methodology/approach

The weak form of the fourth-order PFM is derived from two governing equations for finite element modeling. C0 non-Sibsonian shape functions are derived using distance measures on a generalized quad element. These shape functions are then degree-elevated with a Bernstein–Bézier (BB) patch to obtain higher-order continuity (C1). The quad element is divided into several background triangular elements to apply the Gauss quadrature rule for numerical integration. Both the fourth-order and second-order PFMs are implemented in a finite element framework. The efficiency of the interpolation function is studied in terms of convergence and accuracy for capturing crack topology in the fourth-order PFM.
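The degree-elevation step can be illustrated in one dimension: raising the Bernstein degree of a polynomial adds a control coefficient while leaving its values unchanged. The sketch below is a minimal 1D version; the paper applies the analogous construction to 2D BB patches.

```python
def bernstein_eval(coeffs, t):
    # de Casteljau evaluation of a Bernstein-form polynomial at t in [0, 1].
    pts = list(coeffs)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

def degree_elevate(coeffs):
    # Raise degree n -> n+1 without changing the polynomial:
    #   c'_0 = c_0,  c'_{n+1} = c_n,
    #   c'_i = (i/(n+1)) c_{i-1} + (1 - i/(n+1)) c_i  for 1 <= i <= n.
    n = len(coeffs) - 1
    out = [coeffs[0]]
    for i in range(1, n + 1):
        r = i / (n + 1)
        out.append(r * coeffs[i - 1] + (1 - r) * coeffs[i])
    out.append(coeffs[-1])
    return out
```

The elevated representation spans the same polynomial but in a richer basis, which is what allows the C0 interpolants to be lifted toward the C1 continuity the fourth-order weak form requires.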

Findings

It is observed that the fourth-order PFM has higher accuracy and faster convergence than the second-order PFM using non-Sibsonian interpolants. The former predicts higher failure loads and failure displacements than the second-order model due to the additional higher-order terms in the energy equation. The fracture pattern is realistic when only the tensile part of the strain energy drives fracture evolution. When both tensile and compressive energy are taken into account for crack evolution, fracture is also observed in the compressive region, which is unrealistic. The length scale has a distinct effect on the failure load of the specimen.

Originality/value

Fourth-order PFM is implemented using C1 non-Sibsonian type of shape functions. The derivation and implementation are carried out for both the second-order and fourth-order PFM. The length scale effect on both models is shown. The better accuracy and convergence rate of the fourth-order PFM over second-order PFM are studied using the current approach. The critical difference between the isotropic phase field and the hybrid phase field approach is also presented to showcase the importance of strain energy decomposition in PFM.

Details

Engineering Computations, vol. 40 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 24 May 2023

Rosa Vinciguerra, Francesca Cappellieri, Michele Pizzo and Rosa Lombardi

Abstract

Purpose

This paper aims to define a hierarchical and multi-criteria framework based on pillars of the Modernization of Higher Education to evaluate European Accounting Doctoral Programmes (EADE-Model).

Design/methodology/approach

The authors applied a quali-quantitative methodology based on the analytic hierarchy process and a survey approach. The authors conducted an extensive literature and regulation review to identify the dimensions affecting the quality of Doctoral Programmes, choosing accounting as the relevant and pivotal field. The survey was then used to select the most critical quality dimensions and derive their weights to build the EADE-Model. The validity of the proposed model has been tested through application to the Italian scenario.
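The analytic hierarchy process step can be sketched as follows: criterion weights are the principal eigenvector of a pairwise comparison matrix, computable by power iteration. The matrix below is hypothetical, not the paper's survey data.

```python
def ahp_weights(matrix, iters=100):
    """Principal-eigenvector weights of a pairwise comparison matrix
    (analytic hierarchy process), via normalized power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]   # normalize so weights sum to 1
    return w

# Hypothetical, perfectly consistent comparisons among three quality
# dimensions with true importances 0.5 : 0.3 : 0.2.
w_true = [0.5, 0.3, 0.2]
M = [[a / b for b in w_true] for a in w_true]
```

For an inconsistent matrix (as real survey data usually produces), the same iteration still converges to the principal eigenvector, and a consistency ratio is normally checked before the weights are used.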

Findings

The findings provide a critical extension of accounting ranking studies by constructing a multi-criteria, hierarchical and updated evaluation model that recognizes the role of doctoral training in the knowledge-based society. The results shed new light on weak areas open to improvement and propose potential amendments to enhance the quality standard of ADE.

Practical implications

Theoretical and practical implications of this paper are directed to academics, policymakers and PhD programme administrators.

Originality/value

The research is original in drafting a hierarchical multi-criteria framework for evaluating ADE in the Higher Education System. This model may be extended to other fields.

Article
Publication date: 29 August 2023

Lili Wu and Shulin Xu

Abstract

Purpose

Financial asset return series usually exhibit nonnormal characteristics such as high peaks, heavy tails and asymmetry. Traditional risk measures like the standard deviation or variance are inadequate for nonnormal distributions. Value at Risk (VaR) is consistent with people's psychological perception of risk, and the asymmetric Laplace distribution (ALD) captures the heavy-tailed and skewed features of return distributions. VaR is therefore used as the risk measure to explore VaR-based asset pricing. Assuming returns obey the ALD, the study explores the impact of the high peaks, heavy tails and asymmetry of financial asset return data on asset pricing.
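To make the VaR notion concrete, the sketch below uses a simple two-exponential-tail (asymmetric Laplace) parameterization with P(X < μ) = p and separate left/right tail rates. Both the parameterization and the numbers are assumptions for illustration; the paper's exact ALD form may differ.

```python
import math

def ald_quantile(q, mu, lam_left, lam_right, p):
    """Quantile of a two-sided exponential (asymmetric Laplace) law:
    P(X < mu) = p, left-tail rate lam_left, right-tail rate lam_right.
    CDF: F(x) = p*exp(lam_left*(x-mu))          for x <  mu
         F(x) = 1 - (1-p)*exp(-lam_right*(x-mu)) for x >= mu."""
    if q <= p:
        return mu + math.log(q / p) / lam_left
    return mu - math.log((1 - q) / (1 - p)) / lam_right

def var_alpha(alpha, mu, lam_left, lam_right, p):
    # VaR at level alpha: the loss magnitude at the alpha-quantile of returns.
    return -ald_quantile(alpha, mu, lam_left, lam_right, p)
```

A fatter left tail (smaller `lam_left`) or more left-tail mass (larger `p`) pushes the low quantile further down and so raises VaR, which is the asymmetry effect the study exploits.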

Design/methodology/approach

A VaR-based capital asset pricing model (CAPM) is constructed under the ALD, following the logic of the classical CAPM, and the corresponding VaR-β coefficients under the ALD are derived.

Findings

ALD-based VaR exhibits smaller tail risk than VaR under the normal distribution as the mean increases. The theoretical derivation yields a more complex capital asset pricing formula involving β coefficients compared to the traditional CAPM. The empirical analysis shows that the CAPM under the ALD can reflect the β-return relationship, and the results are robust. Finally, comparing the two CAPMs reveals that the β coefficients derived in this paper are smaller than those in the traditional CAPM in 69–80% of cases.

Originality/value

The paper uses VaR as a risk measure for financial time series data following ALD to explore asset pricing problems. The findings complement existing literature on the effects of high peaks, heavy tails and asymmetry on asset pricing, providing valuable insights for investors, policymakers and regulators.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 22 April 2022

Lijun Shang, Qingan Qiu, Cang Wu and Yongjun Du

Abstract

Purpose

The study aims to design a limited number of random working cycles as a warranty term and propose two types of warranties, which can help manufacturers ensure product reliability during the warranty period. By extending the proposed warranty to the consumer's post-warranty maintenance model, the authors further investigate two kinds of random maintenance policies to sustain post-warranty reliability, i.e. random replacement first and random replacement last. By integrating a depreciation expense depending on working time, the cost rate is constructed for each random maintenance policy, and some special cases are provided by discussing parameters in the cost rates. Finally, sensitivities of both the proposed warranty and the random maintenance policies are analyzed in numerical experiments.

Design/methodology/approach

The working cycle of products can be monitored by advanced sensors and measuring technologies. By monitoring the working cycle, manufacturers can design warranty policies to ensure product reliability performance and consumers can model the post-warranty maintenance to sustain the post-warranty reliability. In this article, the authors design a limited number of random working cycles as a warranty term and propose two types of warranties, which can help manufacturers to ensure the product reliability performance during the warranty period. By extending a proposed warranty to the consumer's post-warranty maintenance model, the authors investigate two kinds of random replacement policies to sustain the post-warranty reliability, i.e. random replacement first and random replacement last. By integrating a depreciation expense depending on working time, the cost rate is constructed for each random replacement and some special cases are provided by discussing parameters in the cost rate. Finally, sensitivities to both the proposed warranties and random replacements are analyzed in numerical experiments.
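The distinction between "replacement first" and "replacement last" can be illustrated with a small Monte Carlo sketch: under "first", the unit is replaced at the earlier of a planned time and the completion of a random working cycle; under "last", at the later. Exponential cycle lengths and the specific parameters are illustrative assumptions, not the paper's cost-rate model.

```python
import random

def simulate_cycle_lengths(rate_cycle=1.0, planned_time=1.0,
                           n=100_000, seed=7):
    """Mean replacement-cycle length under 'replacement first'
    (replace at min(Y, T)) and 'replacement last' (replace at max(Y, T)),
    where Y is a random working-cycle length and T a planned time."""
    rng = random.Random(seed)
    first = last = 0.0
    for _ in range(n):
        y = rng.expovariate(rate_cycle)   # random working cycle length
        first += min(y, planned_time)
        last += max(y, planned_time)
    return first / n, last / n
```

"First" shortens the expected cycle (more frequent replacement, higher cost per unit time but less exposure to failure); "last" lengthens it, which is the trade-off the cost-rate analysis in the paper formalizes.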

Findings

It is shown that the manufacturer can control the warranty cost by limiting the number of random working cycles. For the consumer, when a greater number of random working cycles is set as the warranty limit, the cost rate can be reduced, but the post-warranty period cannot be lengthened.

Originality/value

The contribution of this article can be highlighted in two key aspects: (1) the authors investigate early warranties to ensure the reliability performance of a product that successively executes projects over random working cycles; (2) by integrating random working cycles into the post-warranty period, the authors are the first to investigate random maintenance policies that sustain post-warranty reliability from the consumer's perspective, which seldom appears in the existing literature.

Details

Journal of Quality in Maintenance Engineering, vol. 29 no. 2
Type: Research Article
ISSN: 1355-2511

Open Access
Article
Publication date: 31 July 2023

Jingrui Ge, Kristoffer Vandrup Sigsgaard, Bjørn Sørskot Andersen, Niels Henrik Mortensen, Julie Krogh Agergaard and Kasper Barslund Hansen

Abstract

Purpose

This paper proposes a progressive, multi-level framework for diagnosing maintenance performance: rapid health checks of key performance indicators for different equipment groups, and end-to-end process diagnostics to further locate potential performance issues. A question-based performance evaluation approach is introduced to support the selection and derivation of case-specific indicators based on diagnostic aspects.

Design/methodology/approach

The case research method is used to develop the proposed framework. The generic parts of the framework are built on existing maintenance performance measurement theories through a literature review. In the case study, empirical maintenance data of 196 emergency shutdown valves (ESDVs) are collected over a two-year period to support the development and validation of the proposed approach.

Findings

To improve processes, companies need a separate performance measurement structure. This paper suggests a hierarchical model in four layers (objective, domain, aspect and performance measurement) to facilitate the selection and derivation of indicators, which could potentially reduce management complexity and help prioritize continuous performance improvement. Examples of new indicators are derived from a case study that includes 196 ESDVs at an offshore oil and gas production plant.

Originality/value

Methodological approaches to deriving various performance indicators have rarely been addressed in the maintenance field. The proposed diagnostic framework provides a structured way to identify and locate process performance issues by creating indicators that can bridge generic evaluation aspects and maintenance data. The framework is highly adaptive as data availability functions are used as inputs to generate indicators instead of passively filtering out non-applicable existing indicators.

Details

International Journal of Quality & Reliability Management, vol. 41 no. 2
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 19 October 2023

Huaxiang Song

Abstract

Purpose

Classification of remote sensing images (RSI) is a challenging task in computer vision. Recently, researchers have proposed a variety of creative methods for automatic recognition of RSI, and feature fusion is a research hotspot for its great potential to boost performance. However, RSI involves unique imaging conditions and cluttered scenes with complicated backgrounds. This greater difference from natural images has meant that previous feature fusion methods yield only insignificant performance improvements.

Design/methodology/approach

This work proposes a two-convolutional neural network (CNN) fusion method named the main and branch CNN fusion network (MBC-Net) as an improved solution for classifying RSI. In detail, MBC-Net employs an EfficientNet-B3 as its main CNN stream and an EfficientNet-B0 as a branch, named MC-B3 and BC-B0, respectively. In particular, MBC-Net includes a long-range derivation (LRD) module, which is specially designed to learn the dependence of different features. Meanwhile, MBC-Net also uses some unique ideas to tackle the problems arising from two-CNN fusion and the inherent nature of RSI.

Findings

Extensive experiments on three RSI sets show that MBC-Net outperforms 38 other state-of-the-art (SOTA) methods published from 2020 to 2023, with a noticeable increase in overall accuracy (OA) values. MBC-Net not only achieves a 0.7% higher OA value on the most confusing NWPU set but also has 62% fewer parameters than the leading approach ranked first in the literature.

Originality/value

MBC-Net is a more effective and efficient feature fusion approach than other SOTA methods in the literature. Visualizations of gradient-weighted class activation mapping (Grad-CAM) reveal that MBC-Net can learn long-range dependences of features that a single CNN cannot. The t-distributed stochastic neighbor embedding (t-SNE) results demonstrate that the feature representation of MBC-Net is more effective than that of other methods. In addition, the ablation tests indicate that MBC-Net is effective and efficient at fusing features from two CNNs.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 22 March 2024

Ravichandran Joghee and Reesa Varghese

Abstract

Purpose

The purpose of this article is to study the link between mean shift and inflation coefficient when the underlying null hypothesis is rejected in the analysis of variance (ANOVA) application after the preliminary test on the model specification.

Design/methodology/approach

A new approach is proposed to study the link between mean shift and inflation coefficient when the underlying null hypothesis is rejected in the ANOVA application. First, we determine this relationship from the general perspective of Six Sigma methodology under the normality assumption. Then, the approach is extended to a balanced two-stage nested design with a random effects model in which a preliminary test is used to fix the main test statistic.

Findings

The features of mean-shifted and inflated (but centred) processes with the same specification limits are studied from the perspective of Six Sigma. The shift and inflation coefficients are derived for the two-stage balanced ANOVA model. We obtained good predictions of the process shift given the inflation coefficient, as demonstrated using numerical results and case studies. The proposed method may thus be used as a tool to obtain an efficient variance estimator under mean shift.
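The mean-shift idea underlying the Six Sigma perspective can be made concrete: the classic 3.4 defects-per-million figure at the 6σ level arises from assuming a 1.5σ mean shift in a normal process. The sketch below shows that arithmetic only; it is not the paper's inflation-coefficient derivation.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fraction_defective(sigma_level, shift=1.5):
    """Two-sided defect fraction for a normal process whose specification
    limits sit at +/- sigma_level standard deviations, after the mean has
    shifted by `shift` standard deviations toward the upper limit."""
    upper = 1.0 - norm_cdf(sigma_level - shift)   # beyond the upper limit
    lower = norm_cdf(-sigma_level - shift)        # beyond the lower limit
    return upper + lower
```

With no shift, a 6σ process defects at roughly two parts per billion; the 1.5σ shift inflates that to about 3.4 per million, which is why distinguishing shift from inflation matters for variance estimation.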

Research limitations/implications

In this work, as a new research approach, we studied the link between the mean shift and inflation coefficients when the underlying null hypothesis is rejected in ANOVA. Derivations for these coefficients are presented, and the results when the null hypothesis is accepted are also studied. This requires preliminary tests to decide on the model assumptions, so researchers are expected to be familiar with the application of preliminary tests.

Practical implications

After studying the proposed approach with extensive numerical results, we have provided two practical examples that demonstrate the significance of the approach for real-time practitioners. The practitioners are expected to take additional care before deciding on the model assumptions by applying preliminary tests.

Originality/value

The proposed approach is original in the sense that there have been no similar approaches existing in the literature that combine Six Sigma and preliminary tests in ANOVA applications.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X
