Search results

1 – 10 of over 10000
Book part
Publication date: 14 July 2006

Duangkamon Chotikapanich and William E. Griffiths

Hypothesis tests for dominance in income distributions have received considerable attention in recent literature. See, for example, Barrett and Donald (2003a, b), Davidson and…

Abstract

Hypothesis tests for dominance in income distributions have received considerable attention in recent literature. See, for example, Barrett and Donald (2003a, b), Davidson and Duclos (2000) and references therein. Such tests are useful for assessing progress towards eliminating poverty and for evaluating the effectiveness of various policy initiatives directed towards welfare improvement. To date the focus in the literature has been on sampling theory tests. Such tests can be set up in various ways, with dominance as the null or alternative hypothesis, and with dominance in either direction (X dominates Y or Y dominates X). The result of a test is expressed as rejection of, or failure to reject, a null hypothesis. In this paper, we develop and apply Bayesian methods of inference to problems of Lorenz and stochastic dominance. The result from a comparison of two income distributions is reported in terms of the posterior probabilities for each of the three possible outcomes: (a) X dominates Y, (b) Y dominates X, and (c) neither X nor Y is dominant. Reporting results about uncertain outcomes in terms of probabilities has the advantage of being more informative than a simple reject/do-not-reject outcome. Whether a probability is sufficiently high or low for a policy maker to take a particular action is then a decision for that policy maker.

The methodology is applied to data for Canada from the Family Expenditure Survey for the years 1978 and 1986. We assess the likelihood of dominance from one time period to the next. Two alternative assumptions are made about the income distributions – Dagum and Singh-Maddala – and in each case the posterior probability of dominance is given by the proportion of times a relevant parameter inequality is satisfied by the posterior observations generated by Markov chain Monte Carlo.
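The proportion-of-draws calculation described above can be sketched as follows. The synthetic "posterior draws" and the particular parameter inequality standing in for Lorenz dominance of Singh-Maddala distributions are illustrative assumptions for the sketch, not the paper's exact conditions.

```python
import numpy as np

# Hypothetical posterior draws for Singh-Maddala shape parameters (a, q)
# of two income distributions X and Y; in practice these come from MCMC.
rng = np.random.default_rng(0)
draws_x = {"a": rng.normal(3.0, 0.2, 10_000), "q": rng.normal(1.2, 0.1, 10_000)}
draws_y = {"a": rng.normal(2.8, 0.2, 10_000), "q": rng.normal(1.1, 0.1, 10_000)}

# Placeholder dominance condition expressed as parameter inequalities
# (a stand-in for the paper's Lorenz/stochastic dominance conditions).
def dominates(d1, d2):
    return (d1["a"] >= d2["a"]) & (d1["a"] * d1["q"] >= d2["a"] * d2["q"])

x_dom_y = dominates(draws_x, draws_y)
y_dom_x = dominates(draws_y, draws_x)

p_x = (x_dom_y & ~y_dom_x).mean()   # posterior P(X dominates Y)
p_y = (y_dom_x & ~x_dom_y).mean()   # posterior P(Y dominates X)
p_none = 1.0 - p_x - p_y            # posterior P(neither dominates)
print(p_x, p_y, p_none)
```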

Details

Dynamics of Inequality and Poverty
Type: Book
ISBN: 978-0-76231-350-1

Article
Publication date: 23 August 2022

Kamlesh Kumar Pandey and Diwakar Shukla

The K-means (KM) clustering algorithm is highly sensitive to the selection of initial centroids, since the initial centroids determine computational effectiveness…

Abstract

Purpose

The K-means (KM) clustering algorithm is highly sensitive to the selection of initial centroids, since the initial centroids determine computational effectiveness, efficiency and susceptibility to local optima. Numerous initialization strategies have been proposed to overcome these problems through random or deterministic selection of initial centroids. The random initialization strategy suffers from local optimization issues and the worst clustering performance, while the deterministic initialization strategy incurs a high computational cost. Big data clustering aims to reduce computation cost and improve clustering efficiency. The objective of this study is to obtain better initial centroids for big data clustering on business management data, without random or deterministic initialization, so as to avoid local optima and improve clustering efficiency and effectiveness in terms of cluster quality, computation cost, data comparisons and iterations on a single machine.

Design/methodology/approach

This study presents the Normal Distribution Probability Density (NDPD) algorithm for big data clustering on a single machine to solve business management-related clustering issues. The NDPDKM algorithm resolves the KM clustering problem through the probability density of each data point. The NDPDKM algorithm first identifies the most probable density data points by using the mean and standard deviation of the datasets through the normal probability density. Thereafter, the NDPDKM determines the K initial centroids by using sorting and linear systematic sampling heuristics.
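A minimal sketch of this kind of density-ranked, systematically sampled initialization is given below; it is an interpretation of the abstract's description, not the authors' implementation, and the helper name `ndpd_init`, the synthetic data and the sampling step are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.cluster import KMeans

def ndpd_init(X, k):
    """Pick k initial centroids by normal-probability-density ranking
    followed by linear systematic sampling (a sketch of the heuristic)."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    # Score each point by its joint normal density under (mu, sigma).
    density = norm.pdf(X, loc=mu, scale=sigma).prod(axis=1)
    order = np.argsort(density)[::-1]    # most probable points first
    step = len(X) // k                   # linear systematic sampling stride
    return X[order[::step][:k]]

rng = np.random.default_rng(1)
X = rng.normal(size=(1_000, 3))          # hypothetical business dataset
km = KMeans(n_clusters=5, init=ndpd_init(X, 5), n_init=1).fit(X)
print(km.inertia_)
```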

Findings

The performance of the proposed algorithm is compared with the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms using the Davies-Bouldin score, Silhouette coefficient, SD Validity, S_Dbw Validity, number of iterations and CPU time validation indices on eight real business datasets. The experimental evaluation demonstrates that the NDPDKM algorithm reduces iterations, local optima and computing cost, and improves cluster performance, effectiveness and efficiency with stable convergence compared to the other algorithms. The NDPDKM algorithm reduces the average computing time by up to 34.83%, 90.28%, 71.83%, 92.67%, 69.53% and 76.03%, and the average number of iterations by up to 40.32%, 44.06%, 32.02%, 62.78%, 19.07% and 36.74%, with reference to the KM, KM++, Var-Part, Murat-KM, Mean-KM and Sort-KM algorithms, respectively.

Originality/value

The KM algorithm is the most widely used partitional clustering approach in data mining, extracting hidden knowledge, patterns and trends for decision-making strategies from business data. Business analytics is one application of big data clustering in which KM clustering is useful across various subcategories such as customer segmentation analysis, employee salary and performance analysis, document searching, delivery optimization, discount and offer analysis, chaplain management, manufacturing analysis, productivity analysis, specialized employee and investor searching and other decision-making strategies in business.

Article
Publication date: 23 January 2019

Rakesh Ranjan, Subrata Kumar Ghosh and Manoj Kumar

The probability distributions of major length and aspect ratio (major length/minor length) of wear debris collected from gear oil used in a planetary gear drive were analysed and…

Abstract

Purpose

The probability distributions of major length and aspect ratio (major length/minor length) of wear debris collected from gear oil used in a planetary gear drive were analysed and modelled. The paper aims to find an appropriate probability distribution model to forecast the kind of wear particles present at different running hours of the machine.

Design/methodology/approach

The used gear oil of the planetary gear box of a slab caster was drained out and the gear box was charged with fresh oil of grade EP-460. Six chronological oil samples were collected at different time intervals between 480 and 1,992 h of machine running. The oil samples were filtered to separate wear particles, and a microscopic study of the wear debris was carried out at 100X magnification. Statistical modelling of the wear debris distributions was done using Weibull and exponential probability distribution models. A comparison was made among the actual, Weibull and exponential probability distributions of major length and aspect ratio of the wear particles.
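A minimal sketch of this kind of distribution comparison is shown below, assuming the particle measurements are available as arrays. The synthetic data and the Kolmogorov-Smirnov goodness-of-fit check are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical wear-debris measurements (micrometres); in practice these
# come from the microscopic study of each oil sample.
rng = np.random.default_rng(2)
major_length = rng.exponential(scale=25.0, size=300)
aspect_ratio = 1.0 + rng.weibull(a=1.8, size=300)

# Fit an exponential model to major length and a Weibull model to aspect ratio.
loc_e, scale_e = stats.expon.fit(major_length)
shape_w, loc_w, scale_w = stats.weibull_min.fit(aspect_ratio, floc=1.0)

# Compare fits with a Kolmogorov-Smirnov statistic (smaller = closer fit).
ks_exp = stats.kstest(major_length, "expon", args=(loc_e, scale_e)).statistic
ks_wei = stats.kstest(aspect_ratio, "weibull_min",
                      args=(shape_w, loc_w, scale_w)).statistic
print(ks_exp, ks_wei)
```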

Findings

The distribution of major length of the wear particles was found to be closer to the exponential probability density function, whereas the Weibull probability density function fitted the distribution of aspect ratio better.

Originality/value

The developed model can be used to analyse the distributions of major length and aspect ratio of the wear debris present in the planetary gear box of a slab caster machine.

Details

Industrial Lubrication and Tribology, vol. 71 no. 2
Type: Research Article
ISSN: 0036-8792

Keywords

Article
Publication date: 5 February 2018

Damaris Serigatto Vicentin, Brena Bezerra Silva, Isabela Piccirillo, Fernanda Campos Bueno and Pedro Carlos Oprime

The purpose of this paper is to develop a control chart for monitoring multiple-stream processes with a finite mixture of probability distributions in the manufacturing industry.

Abstract

Purpose

The purpose of this paper is to develop a control chart for monitoring multiple-stream processes with a finite mixture of probability distributions in the manufacturing industry.

Design/methodology/approach

Data were collected during production of a wheat-based dough in a food-industry plant, and the control charts were developed in these steps: collect the master sample from different production batches; verify, by graphical methods, the number and characteristics of the mixed probability distributions in the production batch; fit the theoretical probability distribution model of each subpopulation in the production batch; build a statistical model based on the mixture distribution, assuming the statistical parameters are unknown; determine the control limits; and compare the mixture chart with the traditional control chart.
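As a rough illustration of control limits derived from a finite mixture: the two-component Gaussian mixture, the 0.135%/99.865% quantile choice and the synthetic master sample below are assumptions for the sketch, not the authors' model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical master sample pooled from several production batches.
rng = np.random.default_rng(3)
master = np.concatenate([rng.normal(50, 2, 400), rng.normal(56, 1.5, 200)])

# Fit a finite mixture of normal distributions with unknown parameters.
gmm = GaussianMixture(n_components=2, random_state=0).fit(master.reshape(-1, 1))

# Control limits as extreme quantiles of the fitted mixture, obtained by
# sampling the mixture (a simple stand-in for inverting its CDF).
sim = gmm.sample(200_000)[0].ravel()
lcl, ucl = np.quantile(sim, [0.00135, 0.99865])   # ~3-sigma-equivalent limits
print(lcl, ucl)
```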

Findings

A chart was developed for monitoring a multiple-stream process; with the parameters considered in its calculation, it showed efficiency similar to that of the traditional control chart.

Originality/value

The control chart can be an efficient tool for customers that receive product batches continuously from a supplier and need to statistically monitor the critical quality parameters.

Details

International Journal of Quality & Reliability Management, vol. 35 no. 2
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 1 August 2005

Degan Zhang, Guanping Zeng, Enyi Chen and Baopeng Zhang

Active service is one of the key problems of the ubiquitous computing paradigm. Context-aware computing is helpful for carrying out this service. Because the context changes with the…

Abstract

Active service is one of the key problems of the ubiquitous computing paradigm, and context-aware computing is helpful for carrying out this service. Because the context changes with the movement or shift of the user, it is often uncertain. Context-aware computing with uncertainty includes obtaining context information, forming models, fusing aware context and managing context information. In this paper, we focus on modeling and computing aware context information with uncertainty for making dynamic decisions during seamless mobility. Our insight is to combine dynamic context-aware computing with improved Random Set Theory (RST) and extended D-S Evidence Theory (EDS). We re-examine the formalism of random sets, argue the limitations of direct numerical approaches, give a new modeling mode based on RST for aware context and propose a computing approach for the modeled aware context. In addition, we extend classic D-S Evidence Theory by considering the context's reliability, time-efficiency and relativity, and compare the relevant computing methods. After enumerating experimental examples from our active space, we provide an evaluation. Through these comparisons, the validity of the new context-aware computing approach based on RST or EDS for ubiquitous active service with uncertain information has been successfully tested.
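Since the abstract leans on D-S Evidence Theory, a minimal sketch of Dempster's rule of combination for two mass functions may help. The frame of discernment {in_office, in_corridor}, the "sensor" names and the mass values are illustrative assumptions, not taken from the paper's extended EDS.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts
    using Dempster's rule (conflicting mass is normalised away)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two hypothetical context sources reporting on the user's location.
office, corridor = frozenset({"in_office"}), frozenset({"in_corridor"})
either = office | corridor
m_wifi  = {office: 0.6, corridor: 0.1, either: 0.3}
m_badge = {office: 0.5, corridor: 0.2, either: 0.3}
print(dempster_combine(m_wifi, m_badge))
```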

Details

International Journal of Pervasive Computing and Communications, vol. 1 no. 3
Type: Research Article
ISSN: 1742-7371

Keywords

Book part
Publication date: 24 January 2022

Eleonora Pantano and Kim Willems

Determining the right number of customers inside a store (i.e. human or customer density) plays a crucial role in retail management strategies. On the one hand, retailers want to…

Abstract

Determining the right number of customers inside a store (i.e. human or customer density) plays a crucial role in retail management strategies. On the one hand, retailers want to maximize the number of visitors they attract in order to optimize returns and profits. On the other hand, ensuring a pleasurable, efficient and COVID-19-proof shopping experience would go against an excessive concentration of shoppers. Fulfilling both retailer and consumer perspectives requires a delicate balance to be struck. This chapter aims at supporting retailers in making informed decisions by clarifying the extent to which store layouts influence (perceived) consumer density. Specifically, the chapter illustrates how new technologies and methodologies (i.e. agent-based simulation) can help in predicting a store layout's ability to reduce consumers' perceived in-store spatial density and related perceptions of human crowding, while also ensuring a certain level of retailer profitability.
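A toy illustration of the agent-based idea, not the chapter's simulation: the grid size, the blocked-aisle layout and the random-walk movement rule are all assumptions, and the output is simply the peak local density observed.

```python
import numpy as np

# Toy agent-based sketch: shoppers random-walk on a small store grid and we
# track the peak number of agents per cell as a crude density measure.
rng = np.random.default_rng(4)
grid_w, grid_h, n_agents, n_steps = 20, 10, 60, 200
blocked = {(x, 5) for x in range(3, 17)}     # a hypothetical shelf aisle
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
pos = [(0, 0)] * n_agents
peak_density = 0

for _ in range(n_steps):
    new_pos = []
    for x, y in pos:
        dx, dy = moves[rng.integers(4)]
        nx = min(max(x + dx, 0), grid_w - 1)
        ny = min(max(y + dy, 0), grid_h - 1)
        new_pos.append((x, y) if (nx, ny) in blocked else (nx, ny))
    pos = new_pos
    counts = {}
    for p in pos:
        counts[p] = counts.get(p, 0) + 1
    peak_density = max(peak_density, max(counts.values()))

print("peak shoppers in any cell:", peak_density)
```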

Article
Publication date: 2 March 2012

G. Mora and J.C. Navarro

In this article the aim is to propose a new way to densify parallelepipeds of R^N by sequences of α-dense curves with accumulated densities.

Abstract

Purpose

In this article the aim is to propose a new way to densify parallelepipeds of R^N by sequences of α-dense curves with accumulated densities.

Design/methodology/approach

This is done by using a basic α-densification technique and adding the new concept of a sequence of α-dense curves with accumulated density to improve the resolution of some global optimization problems.

Findings

It is found that the new technique based on sequences of α-dense curves with accumulated densities considerably simplifies the exploration of the set of optimizer points of an objective function whose feasible set is a parallelepiped K of R^N. Indeed, since the sequence of images of the curves in a sequence of α-dense curves with accumulated density is expansive, at each new step of the algorithm it is only necessary to explore a residual subset. On the other hand, since the sequence of their densities is decreasing and tends to zero, the convergence of the algorithm is assured.
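For readers unfamiliar with α-dense curves, here is a minimal sketch of the basic idea the technique builds on: reducing a two-dimensional minimization over a rectangle to a one-dimensional search along a curve whose image comes within α of every point of the square. The cosine-based curve and the test function are standard illustrative choices, not the article's construction.

```python
import numpy as np

def alpha_dense_curve(t, k=60):
    """Cosine curve whose image is alpha-dense in the unit square,
    with alpha shrinking roughly like 1/k as k grows."""
    return np.column_stack([t, 0.5 * (1.0 - np.cos(k * np.pi * t))])

def objective(p):
    # Illustrative smooth objective with minimum near (0.7, 0.3).
    return (p[:, 0] - 0.7) ** 2 + (p[:, 1] - 0.3) ** 2

# A one-dimensional search along the curve stands in for the 2-D search.
t = np.linspace(0.0, 1.0, 200_001)
points = alpha_dense_curve(t)
best = points[np.argmin(objective(points))]
print("approximate minimizer:", best)
```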

Practical implications

This new densification technique based on sequences of α-dense curves with accumulated densities is applied to densify the feasible set of an objective function that minimizes the quadratic error produced by fitting a model based on a beta probability density function, which is widely used in studies on the transition time of forest vegetation.

Originality/value

A sequence of α-dense curves with accumulated density is an original concept that adds to the set of techniques for optimizing a multivariable function by reduction to a single variable, and constitutes a new application of α-dense curve theory to global optimization.

Article
Publication date: 18 July 2019

Zahid Hussain Hulio and Wei Jiang

The purpose of this paper is to investigate the wind power potential of a site using wind speed, wind direction and other meteorological data, including temperature and air density…

Abstract

Purpose

The purpose of this paper is to investigate the wind power potential of a site using wind speed, wind direction and other meteorological data, including temperature and air density, collected over a period of one year.

Design/methodology/approach

The site-specific air density, wind shear, wind power density, annual energy yield and capacity factors have been calculated at 30 and 10 m above ground level (AGL). The Weibull parameters have been calculated using empirical, maximum likelihood, modified maximum likelihood, energy pattern and graphical methods to determine the other dependent parameters. The accuracies of these methods are determined using correlation coefficient (R²) and root mean square error (RMSE) values.
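As a sketch of one of the listed approaches, the empirical (standard-deviation) method estimates the Weibull shape k and scale c directly from the sample mean and standard deviation of wind speed; the synthetic wind-speed series and the air density value below are assumptions for illustration.

```python
import numpy as np
from math import gamma

# Hypothetical hourly wind speeds (m/s) at one measurement height.
rng = np.random.default_rng(5)
v = rng.weibull(2.0, size=8760) * 6.0

# Empirical (standard-deviation) method for the Weibull parameters.
v_mean, v_std = v.mean(), v.std(ddof=1)
k = (v_std / v_mean) ** -1.086          # shape parameter
c = v_mean / gamma(1.0 + 1.0 / k)       # scale parameter (m/s)

# Wind power density of the fitted Weibull distribution, W/m^2.
rho = 1.225                             # assumed air density, kg/m^3
wpd = 0.5 * rho * c ** 3 * gamma(1.0 + 3.0 / k)
print(round(k, 2), round(c, 2), round(wpd, 1))
```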

Findings

The site-specific wind shear coefficient was found to be 0.18. The annual mean wind speeds were found to be 5.174 and 4.670 m/s at 30 and 10 m heights, respectively, with corresponding standard deviations of 2.085 and 2.059. The mean wind power densities were found to be 59.50 and 46.75 W/m² at 30 and 10 m heights, respectively. According to the economic assessment, wind turbine A is capable of producing wind energy at the lowest cost of US$0.034/kWh.

Practical implications

This assessment provides a sustainable energy solution that minimizes dependence on the continuous supply of oil and gas required to run conventional power plants, a major cause of increasing load shedding in this important industrial and densely populated Pakistani city. It will also reduce disputes between local power producers and oil and gas suppliers during the peak season.

Social implications

This wind resource assessment has important social implications, including reducing environmental problems, supporting an uninterrupted supply of electricity and decreasing the cost of energy per kWh for the people of Karachi.

Originality/value

The results show that the location can be used for installing a wind power plant at a lower cost per kWh compared to other energy sources. Wind energy is thus a sustainable solution at the lowest cost.

Details

International Journal of Energy Sector Management, vol. 14 no. 1
Type: Research Article
ISSN: 1750-6220

Keywords

Content available
Article
Publication date: 17 October 2023

Zhixun Wen, Fei Li and Ming Li

The purpose of this paper is to apply the concept of equivalent initial flaw size (EIFS) to the anisotropic nickel-based single crystal (SX) material, and to predict the fatigue…

Abstract

Purpose

The purpose of this paper is to apply the concept of equivalent initial flaw size (EIFS) to the anisotropic nickel-based single crystal (SX) material, and to predict the fatigue life on this basis. The crack propagation law of the SX material at different temperatures is also investigated, and the weak dependence of EIFS values on loading conditions is verified.

Design/methodology/approach

A three-parameter time to crack initiation (TTCI) method with multiple reference crack lengths under different loading conditions is established, which includes a TTCI backstepping method and an EIFS fitting method. Subsequently, the optimized EIFS distribution is obtained from the random crack propagation rate and maximum likelihood estimation of the median fatigue life. Then, an effective driving force based on the anisotropic and mixed crack propagation mode is proposed to describe the crack propagation rate in the small-crack stage. Finally, the fatigue life of ESE(T) standard specimens at three different temperatures is predicted based on the EIFS values under different survival rates.
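The backstepping idea can be illustrated generically: integrate a crack-growth law backwards from a reference crack length over the observed cycle count to recover an equivalent initial flaw size. The simple Paris law, its coefficients and the constant stress range below are placeholder assumptions; the paper uses an anisotropy-aware effective driving force instead.

```python
import numpy as np

# Placeholder Paris-law growth: da/dN = C * (dK)^m, dK = Y*dS*sqrt(pi*a).
C, m, Y, dS = 1e-11, 3.0, 1.0, 300.0     # assumed material/loading constants

def grow_back(a_ref_mm, n_cycles, dn=10):
    """Integrate the growth law backwards from a reference crack length
    (mm) over n_cycles to estimate an equivalent initial flaw size (mm)."""
    a = a_ref_mm * 1e-3                   # work in metres
    for _ in range(0, n_cycles, dn):
        dK = Y * dS * np.sqrt(np.pi * a)  # MPa*sqrt(m), with dS in MPa
        a -= C * dK ** m * dn             # step backwards in cycles
        a = max(a, 1e-7)                  # keep the crack length positive
    return a * 1e3

# Back-calculate an EIFS-like value from a 0.5 mm reference crack at 2e5 cycles.
print(round(grow_back(0.5, 200_000), 4), "mm")
```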

Findings

The optimized EIFS distribution based on the EIFS fitting and maximum likelihood estimation (MLE) method has the highest accuracy in predicting the total fatigue life, with EIFS values ranging over about [0.0028, 0.0875] mm and a mean EIFS of 0.0506 mm. For survival rates from 5% to 95%, the error between the fatigue life predicted from the crack propagation rate and EIFS distribution and the experimental life lies within a factor-of-two dispersion band.

Originality/value

This paper systematically proposes a new EIFS prediction method for anisotropic materials, establishing a fracture-mechanics framework for predicting the fatigue life of SX material at different temperatures that avoids inaccurate anisotropic constitutive models and fatigue damage accumulation theory.

Details

Multidiscipline Modeling in Materials and Structures, vol. 19 no. 6
Type: Research Article
ISSN: 1573-6105

Keywords

Article
Publication date: 1 April 1993

Guy Jumarie

The complexity of a general system is identified with its temperature and, by analogy with Boltzmann's probability density in thermodynamics, this temperature is related to the…

Abstract

The complexity of a general system is identified with its temperature and, by analogy with Boltzmann's probability density in thermodynamics, this temperature is related to the informational entropy of the system. The concept of informational entropy of deterministic functions provides a straightforward modelling of Brillouin's negentropy (negative entropy), so a system can be characterized by its complexity and its dual complexity. The paper states composition laws for complexities expressed in terms of Shannonian entropy with or without probability, and then extends the approach to the quantum entropy of non-probabilistic data. Some suggestions for future investigation are outlined.
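As a small concrete anchor for the Shannonian side of the discussion: the probability vector and the use of maximum entropy minus entropy as a stand-in for negentropy are illustrative assumptions, not Jumarie's non-probabilistic construction.

```python
import math

def shannon_entropy(p, base=2):
    """Shannon entropy H(p) = -sum p_i log p_i of a probability vector."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

# Hypothetical state probabilities of a small system.
p = [0.5, 0.25, 0.125, 0.125]
h = shannon_entropy(p)
h_max = math.log2(len(p))        # entropy of the uniform distribution
negentropy = h_max - h           # one common reading of Brillouin's negentropy
print(h, h_max, negentropy)
```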

Details

Kybernetes, vol. 22 no. 4
Type: Research Article
ISSN: 0368-492X

Keywords

1 – 10 of over 10000