Search results

1 – 10 of 886
Article
Publication date: 11 March 2019

Vivien Brunel

Abstract

Purpose

In machine learning applications, and in credit risk modeling in particular, model performance is usually measured using cumulative accuracy profile (CAP) and receiver operating characteristic curves. The purpose of this paper is to use the statistics of the CAP curve to provide a new method for calibrating credit PD curves that is not based on the arbitrary choices commonly made in the industry.

Design/methodology/approach

The author maps CAP curves to a ball–box problem and uses statistical physics techniques to compute the statistics of the CAP curve from which the author derives the shape of PD curves.
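Concretely, the CAP/AR machinery the author starts from can be sketched in a few lines. The code below computes a CAP curve and accuracy ratio from hypothetical scores and default flags; it shows only the standard ingredients, not the author's ball–box mapping, and assumes lower scores mean riskier obligors.

```python
# Standard CAP curve and accuracy ratio (AR) from scores and default flags.
# Hypothetical data; assumes lower score = riskier obligor.

def cap_curve(scores, defaults):
    """Sort obligors from riskiest to safest and accumulate captured defaults."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    total = sum(defaults)
    captured, curve = 0, [(0.0, 0.0)]
    for k, i in enumerate(order, start=1):
        captured += defaults[i]
        curve.append((k / len(scores), captured / total))
    return curve

def accuracy_ratio(curve, default_rate):
    """AR = area between CAP and diagonal, divided by the same area for a perfect model."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        area += (x1 - x0) * (y0 + y1) / 2  # trapezoid rule
    return (area - 0.5) / (0.5 * (1.0 - default_rate))

scores = [1, 2, 3, 4, 5, 6, 7, 8]     # hypothetical rating scores
defaults = [1, 1, 0, 1, 0, 0, 0, 0]   # 1 = obligor defaulted
curve = cap_curve(scores, defaults)
ar = accuracy_ratio(curve, sum(defaults) / len(defaults))
```

An AR of 1 corresponds to a perfect model and 0 to a random one; the toy portfolio above scores 13/15.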

Findings

This approach leads to a new shape for PD curves that has not yet been considered in the literature: the Fermi–Dirac function, a two-parameter function depending on the target default rate of the portfolio and the target accuracy ratio of the scoring model. The author shows that this PD curve shape is likely to outperform the logistic PD curve that practitioners often use.
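As an illustration of the shape, here is a minimal sketch of a Fermi–Dirac PD curve over score ranks x in (0, 1), with the level parameter calibrated by bisection so the portfolio-average PD hits a target default rate. The calibration is purely illustrative; the paper's own two-parameter calibration (target default rate and target accuracy ratio) is not reproduced here.

```python
import math

# Fermi-Dirac PD curve over score ranks x in (0, 1):
#     PD(x) = 1 / (1 + exp((x - a) / b))
# a, b stand in for the paper's two parameters; the exact calibration
# equations are in the paper and are not reproduced here.

def fermi_dirac_pd(x, a, b):
    return 1.0 / (1.0 + math.exp((x - a) / b))

def calibrate_a(target_rate, b, n=10_000):
    """Bisect on a so the portfolio-average PD hits the target rate (illustrative)."""
    lo, hi = -5.0, 5.0
    for _ in range(80):
        a = (lo + hi) / 2
        mean_pd = sum(fermi_dirac_pd((i + 0.5) / n, a, b) for i in range(n)) / n
        if mean_pd < target_rate:
            lo = a    # raising a raises every PD, hence the mean
        else:
            hi = a
    return a

a = calibrate_a(target_rate=0.03, b=0.05)
mean_pd = sum(fermi_dirac_pd((i + 0.5) / 10_000, a, 0.05) for i in range(10_000)) / 10_000
```

The curve is steep around x = a and flat elsewhere, which is what distinguishes it visually from the logistic shape fitted in score space.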

Practical implications

This paper has practical implications for practitioners in banks. The author shows that the logistic function, which is widely used, in particular in retail banking, should be replaced by the Fermi–Dirac function. This has an impact on pricing, granting policy and risk management.

Social implications

Measuring credit risk accurately benefits the bank, of course, and the customers as well: granting is based on a fair evaluation of risk, and pricing is done accordingly. It also gives supervisors better tools to assess the risk of the bank, and of the financial system as a whole, through stress-testing exercises.

Originality/value

The author suggests that practitioners should stop using logistic PD curves and should adopt the Fermi–Dirac function to improve the accuracy of their credit risk measurement.

Details

The Journal of Risk Finance, vol. 20 no. 2
Type: Research Article
ISSN: 1526-5943

Open Access
Article
Publication date: 8 March 2022

Riyajur Rahman and Nipen Saikia

Abstract

Purpose

Let p[1,r;t] be defined by $\sum_{n=0}^{\infty} p_{[1,r;t]}(n)\,q^n = (E_1E_r)^t$, where t is a non-zero rational number, r ≥ 1 is an integer and $E_r=\prod_{n=0}^{\infty}\bigl(1-q^{r(n+1)}\bigr)$ for |q| < 1. The function p[1,r;t](n) generalises the two-colour partition function p[1,r;−1](n). In this paper, the authors prove some new congruences modulo odd primes by taking r = 5, 7, 11 and 13, and non-integral rational values of t.
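For orientation, the integer case t = −1, r = 5 can be checked directly: the coefficients of $(E_1E_5)^{-1}$ count two-colour partitions, in which parts divisible by 5 carry a second colour. The sketch below computes these coefficients by a standard generating-function recurrence; the paper's actual results concern non-integral rational t, which this sketch does not cover.

```python
# Coefficients of (E_1 * E_r)^(-1) = prod 1/(1 - q^m) * prod 1/(1 - q^{rm}),
# i.e. two-colour partition counts, up to q^N.

def two_colour_counts(r, N):
    coeffs = [0] * (N + 1)
    coeffs[0] = 1
    # Parts of colour 1 (all sizes) and colour 2 (multiples of r).
    parts = list(range(1, N + 1)) + list(range(r, N + 1, r))
    for k in parts:
        for n in range(k, N + 1):       # geometric-series update for 1/(1 - q^k)
            coeffs[n] += coeffs[n - k]
    return coeffs

counts = two_colour_counts(5, 10)
```

For example, n = 5 has eight such partitions: the seven ordinary partitions of 5 plus the single green part 5.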

Design/methodology/approach

Using q-series expansions and identities, the authors establish general congruences modulo prime numbers for the two-colour partition function.

Findings

In the paper, the authors study congruence properties of the two-colour partition function for fractional values of t. They also give some particular cases as examples.

Originality/value

Partition functions for fractional values of t were first studied in 2019 by Chan and Wang for Ramanujan's general partition function, and the study was extended by Xia and Zhu in 2020. In 2021, Baruah and Das also proved some congruences related to the fractional partition functions previously investigated by Chan and Wang. In this sequel, some congruences are proved for two-colour partitions. The results presented in the paper are original.

Details

Arab Journal of Mathematical Sciences, vol. 29 no. 2
Type: Research Article
ISSN: 1319-5166

Article
Publication date: 8 February 2016

Yossi Hadad and Baruch Keren

Abstract

Purpose

The purpose of this paper is to propose a method to determine the optimal number of operators to be assigned to a given number of machines, as well as the number of machines that will be run by each operator (a numerical partition). This determination should be made with the objective of minimizing production costs or maximizing profits.

Design/methodology/approach

The method calculates the machine interference rate via the binomial distribution function. The optimal assignment is found by transforming the partition problem into a shortest-path problem on a directed acyclic graph.
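The reduction can be sketched as follows: nodes of the DAG are "machines assigned so far", and an edge from i to j means one operator runs j − i machines. The per-operator cost function below is hypothetical (the paper derives its costs from the binomial interference calculations); the shortest-path dynamic program is the part being illustrated.

```python
# Shortest-path formulation of the operator/machine partition problem.

def cost(m):
    # Hypothetical: fixed operator cost plus a convex interference penalty.
    return 10 + 0.5 * m * m

def best_partition(M, max_per_op):
    """Min total cost to assign M machines; edge i -> j = one operator runs j - i machines."""
    INF = float("inf")
    dist = [INF] * (M + 1)
    dist[0], back = 0.0, [0] * (M + 1)
    for j in range(1, M + 1):
        for i in range(max(0, j - max_per_op), j):
            d = dist[i] + cost(j - i)
            if d < dist[j]:
                dist[j], back[j] = d, i
    sizes, j = [], M          # recover the partition by backtracking
    while j > 0:
        sizes.append(j - back[j])
        j = back[j]
    return dist[M], sorted(sizes)

total, sizes = best_partition(12, 6)
```

With this toy cost, 12 machines are best covered by three operators running 4 machines each.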

Findings

The method enables the calculation of the adjusted cycle time, the workload of the operators and the utilization of the machines, as well as the production yield, the total cost per unit and the hourly profit for each potential assignment of operators to machines. In a case study, the output per hour predicted by the proposed method deviated from the actual value by about 2 percent.

Practical implications

The paper provides formulas and tables that give machine interference rates through the application of the binomial distribution. The practicability of the proposed method is demonstrated by a real-life case study.

Originality/value

The method can be applied in a wide variety of manufacturing systems that use many identical machines. This includes tire presses in tire manufacturing operations, ovens in pastry manufacturing systems, textile machines, and so on.

Details

International Journal of Productivity and Performance Management, vol. 65 no. 2
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 10 February 2023

Rokhsaneh Yousef Zehi and Noor Saifurina Nana Khurizan

Abstract

Purpose

Uncertainty in data, whether in real-valued or integer-valued data, may result in infeasible optimal solutions or unreliable efficiency scores and ranking of decision-making units. To handle the uncertainty in integer-valued factors in data envelopment analysis (DEA) models, this study aims to propose a robust DEA model which is applicable in the presence of such factors.

Design/methodology/approach

This research focuses on the application of fuzzy interpretation of efficiency to a mixed-integer DEA (MIDEA) model. The robust optimization approach is used to address the uncertain integer-valued parameters in the proposed MIDEA model.

Findings

In this study, the authors propose an MIDEA model without any equality constraints, to avoid the problems such constraints cause in constructing the robust counterpart of conventional MIDEA models. The characteristics and conditions for constructing the uncertainty set with uncertain integer-valued parameters are studied, and a robust MIDEA model is proposed under a combined box-polyhedral uncertainty set. The applicability of the developed models is shown in a case study of Malaysian public universities.

Originality/value

This study develops an MIDEA model, equivalent to the conventional MIDEA model, that excludes equality constraints, which is crucial in the robust approach to avoid a restricted feasible region or infeasible solutions. It also proposes a robust DEA approach that is applicable in cases with uncertain integer-valued parameters, unlike previous studies in the robust DEA field, where uncertain parameters are generally assumed to be real-valued.

Details

Journal of Modelling in Management, vol. 19 no. 1
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 10 July 2017

Abdelrahman E.E. Eltoukhy, Felix T.S. Chan and S.H. Chung

Abstract

Purpose

The purpose of this paper is twofold: first to carry out a comprehensive literature review for state of the art regarding airline schedule planning and second to identify some new research directions that might help academic researchers and practitioners.

Design/methodology/approach

The authors mainly focus on research work that appeared in the last three decades. The search was conducted across databases using four keywords: “Flight scheduling,” “Fleet assignment,” “Aircraft maintenance routing” (AMR) and “Crew scheduling”. Moreover, combinations of the keywords were used to find integrated models. Duplicates arising from database variety and articles written in languages other than English were discarded.

Findings

The authors studied 106 research papers and categorized them into five categories. In addition, subcategories were further identified according to the model features. After discussing up-to-date research work, the authors suggest some future directions to contribute to the existing literature.

Research limitations/implications

The presented categories and subcategories are based on model characteristics rather than on the model formulation and solution methodology commonly used in the literature. One advantage of this classification is that it may help scholars understand the main variations between the models. On the other hand, identifying future research opportunities should help academic researchers and practitioners develop new models and improve the performance of existing ones.

Practical implications

This study proposes some practical considerations to enhance the efficiency of the schedule planning process, for example, using a dynamic Stackelberg game strategy for market competition in flight scheduling, considering a re-fleeting mechanism under a heterogeneous fleet for fleet assignment, and considering stochastic departure and arrival times for AMR.

Originality/value

In the literature, each review paper focused on only one of the five categories, classified according to model formulation and solution methodology. In this work, the authors attempt to propose, for the first time, a comprehensive review covering all categories, and develop new classifications for each category. The proposed classifications are hence novel and significant.

Details

Industrial Management & Data Systems, vol. 117 no. 6
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 29 July 2014

Kanchan Jain, Isha Dewan and Monika Rani

Abstract

Purpose

Joint reliability importance (JRI) of components measures the effect of a change in their reliability on the system reliability. The authors consider two coherent multi-component systems – a series-in-parallel system (series subsystems arranged in parallel) and a parallel-in-series system (parallel subsystems arranged in series). All components in the subsystems are assumed to be independent but not identically distributed, and the subsystems have no component in common. The paper aims to discuss these issues.

Design/methodology/approach

For both the systems, the expressions for the JRI of two or more components are derived. The results are extended to include subsystems where some of the components are replicated.
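A common way to express the JRI of two components is as a mixed second difference of the system reliability; the sketch below applies that definition to a series-in-parallel system with independent, non-identical components. The reliabilities are hypothetical, and this is the generic textbook definition rather than the authors' closed-form expressions.

```python
# JRI as a mixed second difference of system reliability, for a
# series-in-parallel system (series subsystems arranged in parallel).

def series_in_parallel(subsystems):
    """subsystems: list of lists of component reliabilities."""
    unrel = 1.0
    for sub in subsystems:
        r_sub = 1.0
        for p in sub:
            r_sub *= p          # series subsystem
        unrel *= (1.0 - r_sub)  # parallel combination
    return 1.0 - unrel

def jri(subsystems, ci, cj):
    """JRI of components ci, cj given as (subsystem index, position) pairs."""
    def with_values(vi, vj):    # force the two components to work (1) or fail (0)
        subs = [list(s) for s in subsystems]
        subs[ci[0]][ci[1]] = vi
        subs[cj[0]][cj[1]] = vj
        return series_in_parallel(subs)
    return (with_values(1, 1) - with_values(1, 0)
            - with_values(0, 1) + with_values(0, 0))

subs = [[0.9, 0.8], [0.7, 0.95]]
same = jri(subs, (0, 0), (0, 1))    # two components in the same series subsystem
cross = jri(subs, (0, 0), (1, 0))   # components in different (parallel) subsystems
```

As expected, components in series have positive JRI (they complement each other), while components in parallel branches have negative JRI (they substitute for each other).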

Findings

The findings are illustrated by considering a bridge structure as a series-in-parallel system in which some components are repeated in different subsystems. Numerical results are also provided for a series-in-parallel system with unreplicated components. The JRI for various combinations of components in both illustrations is given in tables and figures.

Originality/value

Chang and Jan (2006) and Gao et al. (2007) found the JRI of two components of a series-in-parallel system when the components are identically and independently distributed. The authors derive the JRI of m ≥ 2 components for series-in-parallel and parallel-in-series systems when the components are independent but need not be identically distributed. Expressions are obtained for the above-mentioned systems with replicated and unreplicated components in different subsystems. These results will be useful in analyzing the joint effect of the reliability of several components on the system reliability, and will be of value to design engineers designing systems that function more effectively and for a longer duration.

Details

International Journal of Quality & Reliability Management, vol. 31 no. 7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 23 November 2020

Ana Camila Ferreira Mamede, José Roberto Camacho, Rui Esteves Araújo and Igor Santos Peretta

Abstract

Purpose

The purpose of this paper is to present the Moore-Penrose pseudoinverse (PI) modeling and compare with artificial neural network (ANN) modeling for switched reluctance machine (SRM) performance.

Design/methodology/approach

In the design of an SRM, a number of parameters are chosen empirically within certain intervals; therefore, to find an optimal geometry, it is necessary to define a good model for the SRM. The proposed modeling uses the Moore-Penrose PI for the resolution of linear systems, together with finite element simulation data. To attest to the quality of the PI modeling, a model using an ANN is established, and the two models are compared against values determined by finite element simulations.
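The pseudoinverse step can be sketched generically: fit a linear-in-parameters model to simulation outputs via the Moore–Penrose pseudoinverse. The quadratic basis and the data below are hypothetical stand-ins for the paper's finite element runs (and the sketch assumes NumPy is available).

```python
import numpy as np

# Linear-in-parameters model fitted by the Moore-Penrose pseudoinverse.
# The rows stand in for finite-element runs over two geometric parameters.

rng = np.random.default_rng(0)
g = rng.uniform(0.0, 1.0, size=(40, 2))           # two geometric parameters
# Design matrix with a quadratic basis: [1, g1, g2, g1*g2, g1^2, g2^2]
A = np.column_stack([np.ones(40), g[:, 0], g[:, 1],
                     g[:, 0] * g[:, 1], g[:, 0] ** 2, g[:, 1] ** 2])
true_w = np.array([1.0, 2.0, -1.0, 0.5, 3.0, -0.25])
y = A @ true_w                                    # noiseless "simulation" outputs

w = np.linalg.pinv(A) @ y                         # least-squares fit in one line
```

For noiseless, full-rank data the pseudoinverse recovers the coefficients exactly; with noisy or rank-deficient data it returns the minimum-norm least-squares solution, which is the property the paper's modeling relies on.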

Findings

The proposed PI model showed better accuracy, generalization capacity and lower computational cost than the ANN model.

Originality/value

The proposed approach can be applied to any problem as long as experimental/computational results can be obtained and will deliver the best approximation model to the available data set.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 39 no. 6
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 22 September 2020

Seenu N., Kuppan Chetty R.M., Ramya M.M. and Mukund Nilakantan Janardhanan

Abstract

Purpose

This paper aims to present a concise review of state-of-the-art dynamic task allocation strategies. It provides a thorough discussion of the existing strategies, mainly with respect to problem application, constraints, objective functions and uncertainty handling methods.

Design/methodology/approach

This paper introduces the multi-robot dynamic task allocation problem and discusses the challenges that exist in real-world dynamic task allocation problems. Numerous task allocation strategies are discussed, and their characteristic features are compared qualitatively. The paper also exhibits existing research gaps and promising future research directions in dynamic task allocation for multiple mobile robot systems.
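To make the problem concrete, here is a minimal greedy allocator in the spirit of the market-based strategies such reviews survey: each task goes to the robot with the lowest current bid, taken here as straight-line distance plus accumulated travel load. This is entirely illustrative; the reviewed strategies also handle uncertainty, reallocation and richer objective functions.

```python
import math

# Greedy, bid-based task allocation: each task is won by the robot whose
# distance-plus-load bid is lowest. Purely illustrative.

def allocate(robots, tasks):
    """robots: {name: (x, y)}; tasks: list of (x, y). Returns {name: [task indices]}."""
    load = {name: 0.0 for name in robots}
    plan = {name: [] for name in robots}
    for t, (tx, ty) in enumerate(tasks):
        costs = {n: math.hypot(tx - x, ty - y) for n, (x, y) in robots.items()}
        winner = min(costs, key=lambda n: costs[n] + load[n])
        load[winner] += costs[winner]   # winner accumulates travel load
        plan[winner].append(t)
    return plan

plan = allocate({"r1": (0, 0), "r2": (10, 0)}, [(1, 0), (9, 0), (2, 0)])
```

Note how the load term balances the allocation: the third task still goes to r1 because r2's distance dominates, but a long queue at r1 would eventually push tasks to r2.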

Findings

This paper considers the objective functions, robustness, task allocation time, completion time and task reallocation features in its performance analysis of the different task allocation strategies. It suggests suitable real-world applications for the various strategies and identifies the challenges to be resolved in multi-robot task allocation.

Originality/value

This paper provides a comprehensive review of dynamic task allocation strategies and points researchers toward salient research directions in multi-robot dynamic task allocation. It also summarizes the latest approaches to the application of exploration problems.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 15 June 2010

Li Shuiping and Wan Xiaoxue

Abstract

Purpose

The purpose of this paper is to find a global method for the limited K‐partitioning of hypergraphs representing optimal design problems in complex machine systems.

Design/methodology/approach

To represent some real design considerations, a new concept of semi‐free hypergraphs is proposed and a method to apply semi‐free hypergraphs to the decomposition of complex design problems based on optimal models is also suggested. On this basis, the limited K‐partitioning problem of semi‐free hypergraphs and its partitioning objective for the optimal design of complex machines is presented. A global method based on genetic algorithms, GALKP, for the limited K‐partitioning of semi‐free hypergraphs is also proposed. Finally, a case study is presented in detail.
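A toy version of the genetic-algorithm idea can be sketched as follows: K-partition the vertices of an ordinary hypergraph to minimise cut hyperedges plus a balance penalty. It only gestures at GALKP; the semi-free structure and the design-specific limits are not modelled here.

```python
import random

# Toy GA for K-partitioning a hypergraph: fitness = cut hyperedges + imbalance.

def fitness(assign, hyperedges, K):
    cut = sum(1 for e in hyperedges if len({assign[v] for v in e}) > 1)
    sizes = [sum(1 for a in assign if a == k) for k in range(K)]
    return cut + (max(sizes) - min(sizes))   # balance penalty

def ga_partition(n, hyperedges, K, pop=30, gens=200, seed=1):
    rng = random.Random(seed)
    popn = [[rng.randrange(K) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: fitness(a, hyperedges, K))
        elite = popn[: pop // 2]             # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = rng.sample(elite, 2)
            cut_pt = rng.randrange(1, n)
            child = p1[:cut_pt] + p2[cut_pt:]        # one-point crossover
            if rng.random() < 0.3:                   # mutation
                child[rng.randrange(n)] = rng.randrange(K)
            children.append(child)
        popn = elite + children
    best = min(popn, key=lambda a: fitness(a, hyperedges, K))
    return best, fitness(best, hyperedges, K)

edges = [(0, 1, 2), (3, 4, 5), (2, 3)]   # two tight clusters plus one bridge edge
best, f = ga_partition(6, edges, K=2)
```

On this tiny instance the GA should recover the balanced split {0, 1, 2} / {3, 4, 5}, cutting only the bridge hyperedge.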

Findings

Semi-free hypergraphs are a more powerful tool for mapping a complex engineering design problem. The decomposition of complex design problems may be converted into a limited K-partitioning problem of semi-free hypergraphs. The algorithm presented in this paper for the limited K-partitioning of semi-free hypergraphs is fast, effective and powerful.

Research limitations/implications

Traditional methods based on hypergraphs have some limitations when applied to the decomposition of complex problems such as the design of large-scale machine systems. The proposed method is helpful in solving similar engineering design problems.

Practical implications

The paper illustrates a faster and more effective method to implement the decomposition of large‐scale optimal design problems in complex machine systems.

Originality/value

This paper shows a new way to solve complex engineering design problems, based on semi-free hypergraphs and their K-partitioning.

Details

Kybernetes, vol. 39 no. 6
Type: Research Article
ISSN: 0368-492X

Open Access
Article
Publication date: 7 July 2022

Sirilak Ketchaya and Apisit Rattanatranurak

Abstract

Purpose

Sorting is a fundamental problem in computer science. The most well-known divide-and-conquer sorting algorithm is quicksort, which starts by dividing the data into subarrays and finally sorts them.

Design/methodology/approach

In this paper, an algorithm named Dual Parallel Partition Sorting (DPPSort) is analyzed and optimized. It is built on a partitioning algorithm named Dual Parallel Partition (DPPartition), which is analyzed and optimized here and combined with the standard sorting functions qsort and STLSort, implementations of the quicksort and introsort algorithms, respectively. The algorithm runs on any shared-memory/multicore system; the OpenMP library, which supports multiprocessing programming, is used so as to be compatible with the C/C++ standard library functions. The authors’ algorithm recursively divides an unsorted array into two equal halves in parallel using Lomuto’s partitioning and merging, without compare-and-swap instructions. Then, qsort/STLSort is executed in parallel once a subarray is smaller than the sorting cutoff.
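The two ingredients can be illustrated sequentially: Lomuto partitioning plus a cutoff below which a library sort takes over. The real DPPSort runs the partition itself and the leaf sorts in parallel with OpenMP; the Python sketch below only shows the control flow.

```python
import random

# Sequential sketch: Lomuto partitioning with a library-sort cutoff.

CUTOFF = 16

def lomuto_partition(a, lo, hi):
    """Partition a[lo:hi+1] around the pivot a[hi]; return the pivot's final index."""
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def dpp_style_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= CUTOFF:
        a[lo:hi + 1] = sorted(a[lo:hi + 1])  # library sort below the cutoff
        return
    p = lomuto_partition(a, lo, hi)
    dpp_style_sort(a, lo, p - 1)   # in DPPSort these two recursive
    dpp_style_sort(a, p + 1, hi)   # calls would run in parallel

rng = random.Random(42)
data = [rng.randrange(1000) for _ in range(500)]
original = list(data)
dpp_style_sort(data)
```

The cutoff plays the same role as in the paper: recursion handles the coarse partitioning, and a tuned library sort finishes the small subarrays.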

Findings

In the authors’ experiments on a 4-core Intel i7-6770 system running Ubuntu Linux, DPPSort is faster than qsort and STLSort by up to 6.82× and 5.88×, respectively, on Uint64 random distributions.

Originality/value

The performance of a parallel sorting algorithm can be improved by reducing the compare-and-swap instructions it executes. This concept can be applied to related problems to increase the speedup of other algorithms.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964
