Search results

1 – 10 of over 2000
Book part
Publication date: 2 November 2009

Adrian R. Fleissig and Gerald A. Whitney

Abstract

A new nonparametric procedure is developed to evaluate the significance of violations of weak separability. The procedure correctly detects weak separability with high probability using simulated data whose violations of weak separability are caused by adding measurement error. Results are not very sensitive to misspecification of the amount of measurement error by the researcher. The methodology also correctly rejects weak separability for nonseparable simulated data. We fail to reject weak separability for a monetary and consumption data set that has violations of revealed preference, which suggests that measurement error may be the source of the observed violations.
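
Tests of this kind rest on nonparametric revealed preference conditions. As a rough illustration of the underlying check only (not the authors' significance procedure or measurement-error adjustment), the following sketch tests the generalized axiom of revealed preference (GARP) for observed price and quantity matrices.

import numpy as np

def garp_violations(prices, quantities):
    p = np.asarray(prices, dtype=float)       # T x n matrix of prices
    q = np.asarray(quantities, dtype=float)   # T x n matrix of quantities
    T = p.shape[0]
    cost = p @ q.T                            # cost[t, s] = p_t . q_s
    own = np.diag(cost)                       # expenditure in each period
    direct = cost <= own[:, None] + 1e-9      # q_t directly revealed preferred to q_s
    R = direct.copy()                         # transitive closure (Warshall)
    for k in range(T):
        R |= R[:, k:k+1] & R[k:k+1, :]
    strict = cost < own[:, None] - 1e-9
    # GARP violation: q_t revealed preferred to q_s, yet q_t is strictly
    # cheaper than q_s at period-s prices
    return [(t, s) for t in range(T) for s in range(T) if R[t, s] and strict[s, t]]

# two observations that violate GARP
prices = [[1.0, 2.0], [2.0, 1.0]]
quantities = [[1.0, 2.0], [2.0, 1.0]]
print(garp_violations(prices, quantities))    # -> [(0, 1), (1, 0)]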

Details

Measurement Error: Consequences, Applications and Solutions
Type: Book
ISBN: 978-1-84855-902-8

Open Access
Article
Publication date: 17 August 2020

Slavcho Shtrakov

Abstract

In this paper we study a class of complexity measures induced by a new data structure for representing k-valued functions (operations), called the minor decision diagram. When values are assigned to some of the variables of a function, the resulting functions are called subfunctions; when some variables are identified with one another, the resulting functions are called minors. The sets of essential variables in subfunctions of f are called separable in f.

We examine the maximal separable subsets of variables and their conjugates, introduced in the paper, proving that each such set has at least one conjugate. The essential arity gap gap(f) of a function f is the minimal number of essential variables in f that become fictive when distinct essential variables in f are identified. We also investigate separable sets of variables in functions with non-trivial arity gap. This allows us to solve several important algebraic, computational and combinatorial problems about finite-valued functions.
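
As a rough illustration of the definitions above (not of the paper's minor decision diagrams or conjugate sets), a k-valued function can be stored naively as a lookup table, with subfunctions obtained by fixing a variable, minors by identifying two variables, and essential variables detected directly; the sketch below assumes that naive representation.

from itertools import product

def subfunction(f, n, k, i, c):
    # fix variable x_i to the constant c; the result has n - 1 variables
    return {x: f[x[:i] + (c,) + x[i:]] for x in product(range(k), repeat=n - 1)}

def minor(f, n, k, i, j):
    # identify x_j with x_i (replace x_j by x_i); the result has n - 1 variables
    def expand(x):
        y = list(x)
        y.insert(j, x[i if i < j else i - 1])
        return tuple(y)
    return {x: f[expand(x)] for x in product(range(k), repeat=n - 1)}

def is_essential(f, n, k, i):
    # x_i is essential if changing only x_i can change the value of f
    return any(f[x] != f[x[:i] + (c,) + x[i + 1:]]
               for x in product(range(k), repeat=n) for c in range(k))

# example over k = 2: f(x0, x1, x2) = x0 XOR x1, so x2 is fictive
k, n = 2, 3
f = {x: x[0] ^ x[1] for x in product(range(k), repeat=n)}
print([is_essential(f, n, k, i) for i in range(n)])           # [True, True, False]
g = minor(f, n, k, 0, 1)                                      # identify x1 with x0
print([is_essential(g, n - 1, k, i) for i in range(n - 1)])   # both now fictive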

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Keywords

Details

Mathematical and Economic Theory of Road Pricing
Type: Book
ISBN: 978-0-08-045671-3

Article
Publication date: 29 July 2014

Yanfeng Xing and Yansong Wang

Abstract

Purpose

The purpose of this paper is to propose a new assembly variation analysis model to analyze assembly variation for sheet metal parts. The main focus is to analyze assembly processes based on the method of power balance.

Design/methodology/approach

Starting from issues in assembly variation analysis, a review highlights the critical aspects of tolerance analysis. The method of influence coefficient (MIC) cannot accurately analyze the relationship between part variations and assembly variations, because a welding point is not a point but a small area. Therefore, new sensitivity matrices are generated based on the method of power balance.
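
However the sensitivity matrices are derived, analyses of this type ultimately propagate part variation linearly through a sensitivity matrix. The sketch below shows only that generic propagation step with invented numbers; the power-balance derivation of the matrices is not reproduced.

import numpy as np

# hypothetical sensitivity matrix: rows = assembly measurement points,
# columns = part deviation sources (values are invented)
S = np.array([[0.8, 0.3],
              [0.1, 0.9]])

sigma_parts = np.array([0.05, 0.08])     # standard deviations of part deviations (mm)
V_parts = np.diag(sigma_parts ** 2)      # assume independent part deviations

V_assembly = S @ V_parts @ S.T           # linear propagation of variation
print(np.sqrt(np.diag(V_assembly)))      # standard deviation at each assembly point (mm)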

Findings

Two cases illustrate the process of assembly variation analysis, and the results indicate that the new method has higher accuracy than the MIC.

Research limitations/implications

This study is limited to assembly variation analysis for sheet metal parts, which can be applied to auto bodies and aircraft bodies.

Originality/value

This paper provides a new assembly variation analysis method based on the method of power balance.

Details

Assembly Automation, vol. 34 no. 3
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 5 September 2016

Runhai Jiao, Shaolong Liu, Wu Wen and Biying Lin

Abstract

Purpose

The large volume of big data makes traditional clustering algorithms, which are usually designed to process an entire data set, impractical. The purpose of this paper is to focus on incremental clustering, which divides data into a series of data chunks so that only a small amount of data needs to be clustered at a time. Few studies on incremental clustering algorithms address the problems of optimizing cluster center initialization for each data chunk and of selecting multiple passing points for each cluster.

Design/methodology/approach

By optimizing the initial cluster centers, the quality of the clustering results for each data chunk is improved, which in turn enhances the quality of the final clustering results. Moreover, by selecting multiple passing points, more accurate information is passed down to improve the final clustering results. A method is proposed to solve these two problems and is applied in the proposed algorithm, which is based on the streaming kernel fuzzy c-means (stKFCM) algorithm.
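
A minimal sketch of the chunk-wise idea follows. It uses plain (non-kernel) fuzzy c-means on synthetic data, initializes each chunk from the previous chunk's centers, and carries a few high-membership "passing points" forward; the paper's kernel formulation and its specific optimization of the initial centers and passing points are not reproduced.

import numpy as np

def fuzzy_c_means(X, c, centers=None, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    if centers is None:
        centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)        # membership matrix
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

def passing_points(X, u, per_cluster=3):
    # keep the few highest-membership points of each cluster for the next chunk
    idx = [np.argsort(-u[:, j])[:per_cluster] for j in range(u.shape[1])]
    return X[np.unique(np.concatenate(idx))]

rng = np.random.default_rng(1)
data = rng.normal(size=(3000, 2)) + rng.choice([-4, 0, 4], size=(3000, 1))
chunks = np.array_split(data, 10)

centers, carried = None, np.empty((0, 2))
for chunk in chunks:
    X = np.vstack([carried, chunk])              # current chunk plus carried points
    centers, u = fuzzy_c_means(X, c=3, centers=centers)
    carried = passing_points(X, u)
print(centers)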

Findings

Experimental results show that the proposed algorithm achieves higher accuracy and better performance than the original stKFCM algorithm.

Originality/value

This paper addresses the problem of improving the performance of incremental clustering through optimizing cluster center initialization and selecting multiple passing points. The paper analyzes the performance of the proposed scheme and demonstrates its effectiveness.

Details

Kybernetes, vol. 45 no. 8
Type: Research Article
ISSN: 0368-492X

Keywords

Book part
Publication date: 30 December 2013

Pilar García-Gómez, Erik Schokkaert and Tom Van Ourti

Abstract

Most politicians and ethical observers are not interested in pure health inequalities, as they want to distinguish between different causes of health differences. Measures of “unfair” inequality – direct unfairness and the fairness gap, but also the popular standardized concentration index (CI) – therefore neutralize the effects of what are considered to be “legitimate” causes of inequality. This neutralization is performed by putting a subset of the explanatory variables at reference values, for example, their means. We analyze how the inequality ranking of different policies depends on the specific choice of reference values. We show with mortality data from the Netherlands that the problem is empirically relevant and we suggest a statistical method for fixing the reference values.
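
As a rough illustration of the reference-value mechanics only (not the authors' data or statistical procedure), the sketch below models health on a "legitimate" variable (age) and an "illegitimate" one (income), fixes age at its mean, and compares income-related concentration indices before and after this neutralization; all variable names and numbers are invented.

import numpy as np

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(20, 80, n)
income = rng.lognormal(10, 0.5, n)
health = 90 - 0.3 * age + 2.0 * np.log(income) + rng.normal(0, 3, n)

def concentration_index(h, rank_var):
    # CI = 2 * cov(h, fractional rank) / mean(h)
    r = np.argsort(np.argsort(rank_var)) / (len(h) - 1)   # fractional rank in [0, 1]
    return 2 * np.cov(h, r)[0, 1] / h.mean()

# fit a linear health model and predict with age fixed at its mean (the reference value)
X = np.column_stack([np.ones(n), age, np.log(income)])
beta = np.linalg.lstsq(X, health, rcond=None)[0]
X_ref = X.copy()
X_ref[:, 1] = age.mean()                  # neutralize the "legitimate" cause
fair_health = X_ref @ beta                # standardized health distribution

print(concentration_index(health, income))        # raw income-related inequality
print(concentration_index(fair_health, income))   # inequality net of age differences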

Article
Publication date: 5 January 2010

Ron Layman, Samy Missoum and Jonathan Vande Geest

Abstract

Purpose

The use of stent‐grafts to canalize aortic blood flow for patients with aortic aneurysms is subject to serious failure mechanisms such as a leak between the stent‐graft and the aorta (Type I endoleak). The purpose of this paper is to describe a novel computational approach to understand the influence of relevant variables on the occurrence of stent‐graft failure and quantify the probability of failure for aneurysm patients.

Design/methodology/approach

A parameterized fluid‐structure interaction finite element model of aortic aneurysm is built based on a multi‐material formulation available in LS‐DYNA. Probabilities of failure are assessed using an explicit construction of limit state functions with support vector machines (SVM) and uniform designs of experiments. The probabilistic approach is applied to two aneurysm geometries to provide a map of probabilities of failure for various design parameter values.
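
The probability-of-failure step can be sketched apart from the fluid-structure model: a classifier trained on simulation outcomes labeled safe or failed gives an explicit boundary in the design space, and sampling that boundary yields a failure probability. The sketch below uses scikit-learn and a synthetic two-parameter limit state in place of the LS-DYNA model, with random sampling standing in for the uniform design of experiments.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def simulate(x):
    # stand-in for an expensive simulation: 1 = failure (leak), 0 = safe
    return (x[:, 0] ** 2 + 0.5 * x[:, 1] > 0.8).astype(int)

# design of experiments over the (normalized) two-parameter space
X_doe = rng.uniform(0, 1, size=(60, 2))
y_doe = simulate(X_doe)

# SVM gives an explicit decomposition of the design space into safe / failed regions
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_doe, y_doe)

# Monte Carlo estimate of the probability of failure from the explicit boundary
X_mc = rng.uniform(0, 1, size=(200_000, 2))
print("P(failure) ~", svm.predict(X_mc).mean())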

Findings

Parametric studies conducted in the course of this research successfully identified intuitive failure regions in the parameter space, and failure probabilities were calculated using both a simplified and a more complex aneurysmal geometry.

Originality/value

This research introduces the use of SVM‐based explicit design space decomposition for probabilistic assessment applied to bioengineering problems. This technique allows one to efficiently calculate probabilities of failure. It is particularly suited for problems where outcomes can only be classified as safe or failed (e.g. leak or no leak). Finally, the proposed fluid‐structure interaction simulation accounts for the initiation of Type I endoleak between the graft and the aneurysm due to simultaneous fluid and solid forces.

Details

Engineering Computations, vol. 27 no. 1
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 1 November 2023

Hao Xiang

Abstract

Purpose

It is of great significance for the health monitoring of a liquid rocket engine to build an accurate and reliable fault prediction model. The thrust of a liquid rocket engine is an important indicator for its health monitoring. By predicting the changing value of the thrust, it can be judged whether the engine will fail at a certain time. However, the thrust is affected by various factors, and it is difficult to establish an accurate mathematical model for it. Thus, this study uses a mixture non-parametric regression prediction model to model the thrust for the health monitoring of a liquid rocket engine.

Design/methodology/approach

This study analyzes the characteristics of the least squares support vector regression (LS-SVR) machine. LS-SVR is well suited to modeling small samples of high-dimensional data, but its performance is greatly affected by its key parameters. Thus, this study implements an advanced intelligent algorithm, the real double-chain coding target gradient quantum genetic algorithm (DCQGA), to optimize these parameters, and the regression prediction model LSSVRDCQGA is proposed. The proposed model is then used to model the thrust of a liquid rocket engine.
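
A minimal sketch of the LS-SVR core and of tuning its two key parameters (here taken to be a regularization weight gamma and an RBF kernel width sigma) follows; a simple random search stands in for DCQGA, and synthetic data stand in for the thrust measurements.

import numpy as np

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvr_fit(X, y, gamma, sigma):
    # least squares SVR: solve one linear system for the bias b and dual weights alpha
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]

def lssvr_predict(X_train, b, alpha, sigma, X_new):
    return rbf(X_new, X_train, sigma) @ alpha + b

# synthetic data standing in for thrust measurements
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = 100.0 + 10.0 * np.sinc(X[:, 0]) + rng.normal(0, 0.1, 80)
X_tr, y_tr, X_te, y_te = X[:60], y[:60], X[60:], y[60:]

best_are, best_params = np.inf, None
for _ in range(200):                               # random search standing in for DCQGA
    gamma, sigma = 10 ** rng.uniform(0, 4), 10 ** rng.uniform(-1, 1)
    b, alpha = lssvr_fit(X_tr, y_tr, gamma, sigma)
    pred = lssvr_predict(X_tr, b, alpha, sigma, X_te)
    are = np.mean(np.abs((pred - y_te) / y_te))    # average relative error (ARE)
    if are < best_are:
        best_are, best_params = are, (gamma, sigma)
print("best ARE:", best_are, "with (gamma, sigma) =", best_params)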

Findings

The simulation results show that the average relative error (ARE) on the test samples is 0.37% when using LS-SVR, but 0.3186% when using LSSVRDCQGA on the same samples.

Practical implications

The proposed LSSVRDCQGA model is effective for fault prediction on small-sample, multidimensional data and has a certain degree of generalizability.

Originality/value

The original contribution of this study is to establish the mixture non-parametric regression prediction model LSSVRDCQGA and to address the health monitoring of a liquid rocket engine by modeling the engine's thrust with LSSVRDCQGA.

Details

Journal of Quality in Maintenance Engineering, vol. 30 no. 1
Type: Research Article
ISSN: 1355-2511

Keywords

Book part
Publication date: 1 April 2003

Joel M Podolny and Greta Hsu

Abstract

Sociologists have long recognized that stable patterns of exchange within a market depend on the ability of market actors to solve the problem of cooperation. Less well recognized and understood is a second problem that must be solved – the problem of Knightian uncertainty. This chapter posits that the problem of Knightian uncertainty occurs not only in the market; it underlies a variety of exchange contexts – not just markets, but art worlds and professions as well. These three exchange contexts are similar in so far as a generally accepted quality schema arises as an important solution to the problem of Knightian uncertainty; however, the quality schemas that arise in these three contexts differ systematically along two dimensions – the complexity of the schema and the extent to which the “non-producers” have a voice in the determination of the quality schema. By comparing and contrasting the way in which quality schemas arise in these three domains, this chapter (1) gives some specificity to the notion of quality as a social construction; (2) provides some preliminary insight into why a particular good or service becomes perceived as a market, artistic, or professional offering; and (3) offers an imagery for conceptualizing the mobility of goods and services between these three domains.

Details

The Governance of Relations in Markets and Organizations
Type: Book
ISBN: 978-1-84950-202-3

Article
Publication date: 1 February 1978

Katherine K. Yunker

Abstract

If everyone were indifferent between more and less and between this and that, the problems of allocating scarce resources would be trivialized. The necessity of choice, whether social or individual, would seem absurd. However, people persist in preferring certain “states of the world” to others. As a society is made up of individuals, it seems reasonable that a society's preferences should be “made up” of the preferences of its members. Therefore, any social welfare function, W, should be a function of the individual welfare functions, w_i. That is, W = W(w_1, w_2, …, w_n).
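
For illustration only (the abstract does not commit to any particular functional form), two familiar special cases of such a function are the utilitarian and Rawlsian welfare functions:

W(w_1, \ldots, w_n) = \sum_{i=1}^{n} w_i
\qquad\text{and}\qquad
W(w_1, \ldots, w_n) = \min_{1 \le i \le n} w_i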

Details

Studies in Economics and Finance, vol. 2 no. 2
Type: Research Article
ISSN: 1086-7376
