Search results

1 – 10 of 187
Article
Publication date: 28 October 2020

Tobias Filusch

Abstract

Purpose

This paper aims to introduce and test models for point-in-time probability of default (PD) term structures as required by international accounting standards. These standards prescribe that expected credit losses (ECLs) be recognized for the impairment of financial instruments, and the probability of default is central to quantifying the default risk involved. This paper fills the research gap resulting from the lack of models that expand upon existing risk management techniques, link the PD term structures of different risk classes and comply with accounting standards, e.g. by offering flexibility for business cycle-related variations.

Design/methodology/approach

The author modifies the non-homogeneous continuous-time Markov chain model (NHCTMCM) by Bluhm and Overbeck (2007a, 2007b) and introduces the generalized through-the-cycle model (GTTCM), which generalizes the homogeneous Markov chain approach to a point-in-time model. As part of the overall ECL estimation, an empirical study using Standard and Poor’s (S&P) transition data compares the performance of these models using the mean squared error.
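
For intuition, the homogeneous Markov chain baseline that these models extend can be sketched by taking matrix powers of a one-year rating transition matrix and reading off the default column. The transition matrix below is purely illustrative, not the paper's S&P calibration:

```python
import numpy as np

# Illustrative one-year transition matrix over states (A, B, Default).
# Real applications calibrate this to agency data such as S&P transitions.
P = np.array([
    [0.95, 0.04, 0.01],   # A -> A, B, D
    [0.05, 0.85, 0.10],   # B -> A, B, D
    [0.00, 0.00, 1.00],   # D is absorbing
])

def cumulative_pd_term_structure(P, horizon):
    """Cumulative PD for each non-default rating at t = 1..horizon years,
    obtained from matrix powers of the one-year transition matrix."""
    n = P.shape[0]
    Pt = np.eye(n)
    pds = []
    for _ in range(horizon):
        Pt = Pt @ P              # t-year transition probabilities
        pds.append(Pt[:-1, -1])  # default column for ratings A and B
    return np.array(pds)

term_structure = cumulative_pd_term_structure(P, horizon=5)
print(term_structure[0])  # one-year cumulative PDs for ratings A and B
```

Point-in-time models such as the modified NHCTMCM and the GTTCM then adjust these through-the-cycle term structures for the state of the business cycle.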

Findings

The models can reflect observed PD term structures associated with different time periods. The modified NHCTMCM performs best, at the expense of higher complexity, and only its cumulative PD term structures can be transferred to valid ECL-relevant unconditional PD term structures. For direct calibration to these unconditional PD term structures, the GTTCM is only slightly worse, and it requires only half as many parameters as its competitor. Both models are useful additions to the implementation of accounting regulations.

Research limitations/implications

The tests are only carried out for 15-year samples within a 35-year span of available S&P transition data. Furthermore, a point-in-time forecast of the PD term structure requires a link to the business cycle, which seems difficult to find, but is in principle necessary corresponding to the accounting requirements.

Practical implications

Research findings are useful for practitioners, who apply and develop the ECL models of financial accounting.

Originality/value

The innovative models expand upon the existing methodologies for assessing financial risks, motivated by the practical requirements of new financial accounting standards.

Details

The Journal of Risk Finance, vol. 22 no. 1
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 11 March 2019

Vivien Brunel

Abstract

Purpose

In machine learning applications, and in credit risk modeling in particular, model performance is usually measured using cumulative accuracy profile (CAP) and receiver operating characteristic (ROC) curves. The purpose of this paper is to use the statistics of the CAP curve to provide a new method for calibrating credit PD curves that does not rely on the arbitrary choices commonly made in the industry.

Design/methodology/approach

The author maps CAP curves to a ball–box problem and uses statistical physics techniques to compute the statistics of the CAP curve from which the author derives the shape of PD curves.

Findings

This approach leads to a new type of shape for PD curves that has not yet been considered in the literature, namely the Fermi–Dirac function, a two-parameter function depending on the target default rate of the portfolio and the target accuracy ratio of the scoring model. The author shows that this type of PD curve shape is likely to outperform the logistic PD curve that practitioners often use.
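
As a rough illustration (the parameter names and the rank-based calibration below are assumptions, not the paper's derivation), a Fermi–Dirac-shaped PD curve over normalized score ranks can be written down and calibrated to a target portfolio default rate:

```python
import math

def fermi_dirac_pd(x, mu, theta):
    """Fermi-Dirac-shaped PD curve over a normalized score rank x in [0, 1],
    with x = 0 the riskiest rank. mu and theta are illustrative parameter
    names; the paper ties its two parameters to the portfolio default rate
    and the scoring model's accuracy ratio."""
    return 1.0 / (1.0 + math.exp((x - mu) / theta))

def calibrate_mu(target_default_rate, theta, n=1000):
    """Bisect mu so the portfolio-average PD hits the target default rate
    (the average PD is monotonically increasing in mu)."""
    ranks = [(i + 0.5) / n for i in range(n)]
    lo, hi = -5.0, 5.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        avg = sum(fermi_dirac_pd(x, mid, theta) for x in ranks) / n
        if avg > target_default_rate:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

mu = calibrate_mu(target_default_rate=0.02, theta=0.05)
```

The resulting curve assigns high PDs to the worst ranks and decays toward zero for the best ranks, with the portfolio-average PD matching the target default rate.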

Practical implications

This paper has some practical implications for practitioners in banks. The author shows that the logistic function which is widely used, in particular in the field of retail banking, should be replaced by the Fermi–Dirac function. This has an impact on pricing, the granting policy and risk management.

Social implications

Measuring credit risk accurately benefits the bank of course and the customers as well. Indeed, granting is based on a fair evaluation of risk, and pricing is done accordingly. Additionally, it provides better tools to supervisors to assess the risk of the bank and the financial system as a whole through the stress testing exercises.

Originality/value

The author suggests that practitioners should stop using logistic PD curves and should adopt the Fermi–Dirac function to improve the accuracy of their credit risk measurement.

Details

The Journal of Risk Finance, vol. 20 no. 2
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 3 June 2021

Mariya Gubareva

Abstract

Purpose

This paper provides an objective approach based on available market information that is capable of reducing the subjectivity inherently present in the process of expected loss provisioning under IFRS 9.

Design/methodology/approach

This paper develops a two-step methodology. First, calibrating the credit default swap (CDS)-implied default probabilities to the through-the-cycle default frequencies provides the average weight of the default component in the spread for each forward term. Then, the impairment provisions are calculated for a sample of investment-grade and high-yield obligors by distilling their pure default-risk term structures from the respective term structures of spreads. This research demonstrates how to estimate credit impairment allowances compliant with the IFRS 9 framework.
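
The first step starts from CDS-implied default probabilities. A common first-order approximation for extracting them, the "credit triangle", is sketched below as a hedged illustration rather than the paper's exact procedure:

```python
import math

def cds_implied_pd(spread_bps, horizon_years, recovery=0.40):
    """Cumulative risk-neutral PD implied by a flat CDS spread, using the
    'credit triangle' approximation: hazard rate = spread / (1 - recovery)."""
    hazard = (spread_bps / 10_000) / (1.0 - recovery)
    return 1.0 - math.exp(-hazard * horizon_years)

# e.g. a 120 bps spread with 40% recovery implies roughly a 9.5% 5-year PD
pd_5y = cds_implied_pd(spread_bps=120, horizon_years=5)
```

The paper's point is precisely that only part of such a spread-implied PD reflects pure default risk, which motivates the calibration to through-the-cycle default frequencies.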

Findings

This study finds that for both investment-grade and high-yield exposures, the weight of the default component in the credit spreads always remains below 33%. The research's outcomes contrast with several previous results stating that the default risk premium accounts for at least 40% of CDS spreads. The proposed methodology is applied to calculate IFRS 9-compliant provisions for a sample of investment-grade and high-yield obligors.

Research limitations/implications

Many issuers are not covered by individual Bloomberg valuation curves. However, a way to overcome this limitation is proposed.

Practical implications

The proposed approach offers a path toward better alignment of accounting practices, financial regulation and credit risk management by using expected loss metrics across diverse silos inside organizations. The paper encourages adoption of the proposed methodology, illustrating its application to a set of bond exposures.

Originality/value

No previous research addresses impairment provisioning employing Bloomberg valuation curves. The study fills this gap.

Article
Publication date: 12 October 2021

Gianluca Zanellato and Adriana Tiron-Tudor

Abstract

Purpose

The purpose of the research is to shed light on how the mandatory regulation on nonfinancial information has changed European state-owned enterprises' (SOEs) disclosure levels. In addition, the present research aims to demonstrate, under the lens of legitimacy theory, how Hofstede's cultural dimensions shape social expectations that may have changed after the introduction of a mandatory regulation on nonfinancial reporting.

Design/methodology/approach

The paper adopts a mixed approach. First, it employs content analysis to investigate the disclosure levels of 22 of the 24 European SOEs. Second, using qualitative comparative analysis (QCA), the authors demonstrate how cultural dimensions take on a different role when a change in regulation is introduced.

Findings

The results reveal a slight increase in disclosure relative to the year before the directive was introduced. Additionally, the results demonstrate that none of Hofstede's cultural dimensions on its own is responsible for high disclosure levels. However, the sufficiency analysis outlines several combinations of cultural dimensions that lead to high disclosure levels. In particular, the results demonstrate how the core dimensions leading to the outcome changed once the European Union Directive (EUD) entered into force.

Research limitations/implications

Despite its contributions, the present study is not free of limitations: the investigated sample is limited to a small number of SOEs, the content analysis adopts a dichotomous approach, the analysis is conducted on integrated reports, and the fuzzy-set QCA results cannot be used for generalization but refer only to the investigated sample. Consequently, further studies should investigate a broader sample of SOEs and organizations that adopt other nonfinancial reporting frameworks. Additionally, a qualitative approach to the reports' analysis is recommended.

Practical implications

The study demonstrates how the EUD on nonfinancial information has impacted the disclosure levels of European SOEs. It adopts a fresh methodology rarely used in accounting research. It also demonstrates how cultural conditions shape the social expectations that drive corporations to disclose more information after the introduction of a regulatory framework.

Originality/value

The paper's theoretical contribution refers to its focus on the public sector, and it adopts a methodology rarely used by accounting scholars.

Details

Journal of Applied Accounting Research, vol. 23 no. 1
Type: Research Article
ISSN: 0967-5426

Article
Publication date: 17 April 2009

Andrew W. Lo

Abstract

Purpose

The purpose of this paper is to analyse regulatory reform in the wake of the financial crisis of 2007‐2008.

Design/methodology/approach

The paper proposes a framework for regulatory reform that begins with the observation that financial manias and panics cannot be legislated away, and may be an unavoidable aspect of modern capitalism.

Findings

Financial crises are unavoidable when hardwired human behavior – fear and greed, or "animal spirits" – is combined with free enterprise, and cannot be legislated or regulated away. Like hurricanes and other forces of nature, market bubbles and crashes cannot be entirely eliminated, but their most destructive consequences can be greatly mitigated with proper preparation. In fact, the most damaging effects of a financial crisis come not from the loss of wealth itself, but rather from those who are unprepared for such losses and panic in response. This perspective has several implications for the types of regulatory reform needed in the wake of the financial crisis of 2007‐2008, all centered around the need for greater transparency, improved measures of systemic risk, more adaptive regulations, including counter‐cyclical leverage constraints, and more emphasis on financial literacy starting in high school, including certifications for expertise in financial engineering for the senior management and directors of all financial institutions.

Originality/value

The paper stresses that we must resist the temptation to react too hastily to market events and must instead deliberate thoughtfully and broadly to craft new regulations for the financial system of the twenty‐first century. Financial markets do not need more regulation; they need smarter and more effective regulation.

Details

Journal of Financial Economic Policy, vol. 1 no. 1
Type: Research Article
ISSN: 1757-6385

Article
Publication date: 5 August 2019

Xin Gu, Qing Zhang and Erdogan Madenci

Abstract

Purpose

This paper aims to review the existing bond-based peridynamic (PD) and state-based PD heat conduction models, and further propose a refined bond-based PD thermal conduction model by using the PD differential operator.

Design/methodology/approach

The general refined bond-based PD model is established by replacing the local spatial derivatives in the classical heat conduction equations with their corresponding nonlocal integral forms obtained by the PD differential operator. This modeling approach is representative of the state-based PD models, whereas the resulting governing equations take the form of the bond-based PD models.

Findings

The refined model can be reduced to the existing bond-based PD heat conduction models by specifying particular influence functions. Also, the refined model does not require any calibration procedure, unlike the bond-based PD models. A systematic explicit dynamic solver is introduced to validate 1D, 2D and 3D heat conduction in domains with and without a crack subjected to a combination of Dirichlet, Neumann and convection boundary conditions. All of the PD predictions are in excellent agreement with the classical solutions and demonstrate the nonlocal feature and advantage of PD in dealing with heat conduction in discontinuous domains.
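
The classical solutions used for validation come from local heat conduction theory. A minimal 1D explicit finite-difference (FTCS) comparison solver, with illustrative grid and diffusivity values, might look like:

```python
import numpy as np

def ftcs_heat_1d(u0, alpha, dx, dt, steps):
    """Explicit (FTCS) finite-difference solver for 1D heat conduction with
    Dirichlet ends; stable only while alpha * dt / dx**2 <= 0.5."""
    u = u0.astype(float).copy()
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for this step size"
    for _ in range(steps):
        # interior update: u_i += r * (u_{i+1} - 2 u_i + u_{i-1})
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

# Bar initially at 0 with the left end held at 100 (illustrative values).
u0 = np.zeros(51)
u0[0] = 100.0
u = ftcs_heat_1d(u0, alpha=1e-4, dx=0.02, dt=1.0, steps=500)
```

PD solvers replace the local second derivative above with a nonlocal integral over a horizon of neighboring points, which is what lets them handle cracks without special treatment.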

Originality/value

The existing PD heat conduction models are reviewed. A refined bond-based PD thermal conduction model using the PD differential operator is proposed, and 3D thermal conduction in intact and cracked structures is simulated.

Details

Engineering Computations, vol. 36 no. 8
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 25 April 2022

Ghada Talat Alhothali, Felix Mavondo and Islam Elgammal

Abstract

Purpose

In recent years, there has been increasing interest in achieving sustainable tourism objectives globally, and specifically in Saudi Arabia. The benefits can be maximized if the government succeeds in attracting current pilgrims and influencing their intention to revisit the country as tourists. Hence, the purpose of this paper is to measure pilgrims' revisit intentions to better understand their potential contribution to the Saudi tourism and hospitality industry in the evolving circumstances.

Design/methodology/approach

This paper uses configuration theory to identify the "ideal" type of pilgrim and compares it to the rest to establish whether they differ and whether that difference matters. Data were collected from 278 visitors to the Holy Mosque in Makkah, Saudi Arabia, to perform Umrah.
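
A common way to operationalize "deviation from the ideal type" in configuration-theory studies is a weighted Euclidean distance between a respondent's profile and the ideal profile. The sketch below is a generic illustration of that idea, not the authors' exact measure:

```python
import math

def deviation_from_ideal(profile, ideal, weights=None):
    """Weighted Euclidean distance between an observed profile and the
    'ideal' profile; a common configuration-theory misfit measure (the
    dimensions and weights here are assumptions)."""
    if weights is None:
        weights = [1.0] * len(profile)
    return math.sqrt(sum(w * (p - q) ** 2
                         for w, p, q in zip(weights, profile, ideal)))

# Hypothetical 4-dimension profiles scored on 1-7 scales.
ideal = [6.5, 6.0, 5.5, 6.0]
visitor = [4.0, 6.0, 3.5, 5.0]
print(deviation_from_ideal(visitor, ideal))
```

Larger distances indicate greater deviation from the ideal pilgrim profile, which the findings link to weaker revisit intentions and positive word of mouth.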

Findings

The findings show that a large deviation from the “ideal pilgrim” is negatively related to revisiting intentions and dissemination of positive word of mouth (PWOM).

Research limitations/implications

The development of profiles gives a better understanding of organizations or people across several dimensions viewed holistically. Fundamental to the theory is that only a limited number of configurations achieve optimal performance (however defined).

Originality/value

The analytical approach adopted in this paper leads to achieving verbal and statistical correspondence in tests of “gestalts”. The interest is in establishing whether this difference matters to intentions to revisit and providing PWOM.

Details

Journal of Islamic Marketing, vol. 14 no. 6
Type: Research Article
ISSN: 1759-0833

Book part
Publication date: 7 December 2016

Arch G. Woodside, Pedro Mir Bernal and Alicia Coduras

Abstract

Synopsis

This chapter shows how to construct and test case-based macro models. The chapter makes use of national data to examine influences on quality-of-life of national cultures as complex wholes and entrepreneurship activities in Brazil, Russia, India, China, Germany, and the United States (the six focal nations) plus Denmark (a small-size, economically developed nation). The study tests McClelland's (1961) and more recent scholars' proposition that some cultural configurations nurture entrepreneur startups, while other cultures are biased toward thwarting startups. The study applies complexity theory to develop and empirically test a general theory of cultures', entrepreneurship's, and innovation's impact on quality-of-life across nations. Because culture represents a complex whole of attitudes, beliefs, values, and behavior, the study applies a set-theoretic approach to theory development and testing of alternative cultural configurations. Each of the 28 economically developed and developing nations is scored for the level of its national culture relative to each of the six focal countries. The sample selected for the study enables multi-way comparisons of culture-entrepreneurship-innovation-QOL among large- and small-sized developing and developed nations. The findings graphically present the complex national cultural configuration (x-axis) with entrepreneur nurture/thwart (y-axis) of the 28 nations compared to the six focal nations. The findings also include recognizing national cultures (e.g., Switzerland, the United States) nurturing entrepreneurial behavior versus other national cultures (e.g., Brazil and India) thwarting entrepreneurial behavior. The study concludes with a call to recognize the shift in culturally implicit thinking and behavior necessary for advancing national platforms designed to successfully nurture entrepreneurship.
Entrepreneurship strategy implications include the observation that actions nurturing firm start-ups in nations low in entrepreneurship are unlikely to be successful without reducing such nations' high levels of corruption.

Details

Case Study Research
Type: Book
ISBN: 978-1-78560-461-4

Article
Publication date: 23 November 2022

Chetan Jalendra, B.K. Rout and Amol Marathe

Abstract

Purpose

Industrial robots are extensively used in the robotic assembly of rigid objects, whereas the assembly of flexible objects using the same robot becomes cumbersome and challenging due to transient disturbance. The transient disturbance causes vibration in the flexible object during robotic manipulation and assembly. This is an important problem as the quick suppression of undesired vibrations reduces the cycle time and increases the efficiency of the assembly process. Thus, this study aims to propose a contactless robot vision-based real-time active vibration suppression approach to handle such a scenario.

Design/methodology/approach

A robot-assisted camera calibration method is developed to determine the extrinsic camera parameters with respect to the robot position. Thereafter, an innovative robot vision method is proposed to identify a flexible beam grasped by the robot gripper using a virtual marker and to obtain its dimension, tip deflection and velocity. To model the dynamic behaviour of the flexible beam, the finite element method (FEM) is used. The measured dimensions, tip deflection and velocity of the flexible beam are fed to the FEM model to predict the maximum deflection. The difference between the maximum deflection and the static deflection of the beam is used to compute the maximum error. Subsequently, the maximum error is used in the proposed predictive maximum error-based second-stage controller to send the control signal for vibration suppression. The control signal, in the form of a trajectory, is communicated to the industrial robot controller, which accommodates the various types of delays present in the system.

Findings

The effectiveness and robustness of the proposed controller have been validated through simulation and experimental implementation on an ABB IRB 1410 industrial robot with a standard low-frame-rate camera sensor. In this experiment, two metallic flexible beams of different dimensions with the same material properties have been considered. The robot vision method measures the dimension within an acceptable error limit, i.e. ±3%. The controller can suppress vibration amplitude by up to approximately 97% in an average time of 4.2 s and reduces the stabilization time by up to approximately 93% compared with the uncontrolled case. The vibration suppression performance is also compared with the results of a classical control method and with recent results available in the literature.

Originality/value

The important contributions of the current work are the following: an innovative robot-assisted camera calibration method is proposed to determine the extrinsic camera parameters, eliminating the need for any reference object such as a checkerboard; a robot vision method is developed to identify the object grasped by the robot gripper using a virtual marker and to measure its dimension while accommodating the perspective view; the developed robot vision-based controller works together with the FEM model of the flexible beam to predict the tip position and helps in handling different dimensions and material types; an approach is proposed to handle the different types of delays that are part of the implementation, enabling effective suppression of vibration; and the proposed method uses a low-cost, low-frame-rate camera for the second-stage controller, which does not interfere with the internal controller of the industrial robot.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 3
Type: Research Article
ISSN: 0143-991X

Open Access
Article
Publication date: 7 June 2021

Xudong He, GuangYi Yang, E. Yang, Moli Zhang, Dan Luo, Jingjian Liu, Chongnan Zhao, Qinhua Chen and Fengying Ran

Abstract

Purpose

Based on DNase I and reduced graphene oxide (rGO)-magnetic silicon microspheres (MNPS), a highly sensitive and selective fluorescent probe for the detection of PD-L1 was developed.

Design/methodology/approach

Here, we present a feasible biosensor for the detection of PD-L1 in the plasma of lung tumor patients. In the absence of PD-L1, the PD-L1 aptamer is adsorbed on the surface of the graphene oxide-modified magnetic nanoparticles (rGO-MNPS), leading to effective fluorescence quenching. Upon the addition of PD-L1, the aptamer sequences are specifically recognized by PD-L1 and the aptamer/PD-L1 complex is formed, resulting in the recovery of the quenched fluorescence.

Findings

This sensor can detect PD-L1 over a linear range from 100 pg mL−1 to 100 ng mL−1, and a detection limit of 10 pg mL−1 was achieved.

Originality/value

This work provides an easy and sensitive method for the detection of PD-L1 and will be beneficial to the early diagnosis and prognosis of tumors.

Details

Sensor Review, vol. 41 no. 3
Type: Research Article
ISSN: 0260-2288
