Search results

1 – 10 of over 47,000
Article
Publication date: 2 January 2018

Alexander Zemliak

Abstract

Purpose

This paper aims to propose a new approach to the problem of circuit optimisation by using the generalised optimisation methodology presented earlier. This approach is focused on the application of the maximum principle of Pontryagin to search for the best structure of a control vector providing the minimum central processing unit (CPU) time.

Design/methodology/approach

The process of circuit optimisation is defined mathematically as a controllable dynamical system with a control vector that changes the internal structure of the equations of the optimisation procedure. In this case, the well-known maximum principle of Pontryagin is the best theoretical approach for finding the optimum structure of the control vector. A practical approach to realising the maximum principle is based on analysing the behaviour of a Hamiltonian for various optimisation strategies, and it provides the possibility of finding the optimum switching points of the control vector.
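
As a reminder (a standard statement of the principle, in generic notation rather than the paper's): for a controlled system $\dot{x} = f(x, u)$ with adjoint vector $\psi$, the Pontryagin maximum principle requires the optimal control $u^{*}(t)$ to maximise the Hamiltonian at almost every instant,

$$ H(\psi, x, u) = \psi^{T} f(x, u), \qquad \dot{\psi} = -\frac{\partial H}{\partial x}, \qquad H(\psi(t), x(t), u^{*}(t)) = \max_{u \in U} H(\psi(t), x(t), u). $$

When $U$ is a finite set of control-vector structures, as in the optimisation strategies above, the optimum switching points are the instants at which the maximising $u$ changes.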

Findings

It is shown that although the maximum principle is not a sufficient condition for obtaining the global minimum of this non-linear problem, solutions can be obtained in the form of local minima. These local minima provide a rather low value of CPU time. Numerical results were obtained for both a two-dimensional case and an N-dimensional case.

Originality/value

The applicability of the maximum principle of Pontryagin to the problem of circuit optimisation is analysed systematically for the first time. An important result is the theoretical justification of the formerly discovered effect of acceleration of the process of circuit optimisation.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 37 no. 1
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 April 1998

Guy Jumarie

Abstract

In the information theoretic framework, it is customary to address the problem of defining and analyzing the complexity and organization of systems either by using Shannon entropy, via Jaynes's maximum entropy principle, or by means of the so‐called Kullback informational divergence, which measures the informational distance between two probability distributions. In the present paper, it is shown that the so‐called self‐divergence of Markovian processes can be a useful complement to this approach. After a short background on entropy and organization, the definition of the divergence of Markovian processes is recalled and then used to analyze organization and complexity. We arrive at a principle of maximum self‐divergence which characterizes systems with maximum organization.
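
For reference, the Kullback informational divergence mentioned above is the standard quantity

$$ D(p \,\|\, q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)} \;\ge\; 0, $$

with equality if and only if $p = q$. The self‐divergence of Markovian processes is the paper's specialisation of this quantity; its exact definition is given there.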

Details

Kybernetes, vol. 27 no. 3
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 3 May 2016

H.Y. Liu, Na Si and Ji-Huan He

Abstract

Purpose

The purpose of this paper is to point out a paradox in the variational theory for viscous flows. Chien (1984) claimed to have established a variational principle of maximum power losses for viscous fluids; however, it violates the well-known Helmholtz principle.

Design/methodology/approach

Restricted variables are introduced in the derivation; the first-order and second-order variations of the restricted variables are zero.

Findings

An approximate variational principle of minimum power losses is established, which agrees with Helmholtz's principle, and the paradox is resolved.
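
For context, Helmholtz's principle referred to here is the classical minimum-dissipation statement (a standard form, not the paper's notation): for slow incompressible viscous flow with prescribed boundary velocities, the actual (Stokes) flow minimises the total rate of power loss

$$ \Phi = \frac{\mu}{2} \int_{V} \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right)^{2} \mathrm{d}V, $$

so a principle of maximum power losses would contradict it; this is the paradox the paper resolves.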

Research limitations/implications

This paper focuses on incompressible viscous flows; the theory can be extended to compressible and other viscous flows. It is still difficult to obtain a variational formulation for the Navier-Stokes equations.

Practical implications

The variational principle of minimum power losses can be used directly in numerical methods and analytical analysis.

Originality/value

It is proved that Chien’s variational principle is a minimum principle.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 26 no. 3/4
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 1 January 1987

M.A. Gil and N. Corral

Abstract

In a previous paper, the minimum inaccuracy principle was suggested as an operative method for estimating population parameters when the available experimental information could not be perceived as an exact outcome, but rather as fuzzy information. This principle is an extension of the maximum likelihood principle of estimation from exact experimental data. In this paper, the particularization of the first method to the case in which each item of fuzzy information reduces to a class of exact observations is developed. We then analyze a certain correspondence between the maximum likelihood and minimum inaccuracy principles in estimating parameters after grouping data. In addition, we prove that the second method approximates the first one when a certain natural grouping, or choice of classes, is accomplished. Finally, in order to illustrate the preceding results, some relevant particular cases are examined.
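
As a reminder of the classical principle being extended (standard notation, not the paper's): maximum likelihood estimation from data grouped into classes $C_1, \dots, C_k$ with respective counts $n_1, \dots, n_k$ selects

$$ \hat{\theta} = \arg\max_{\theta} \prod_{j=1}^{k} P_{\theta}(C_j)^{\,n_j}. $$

The minimum inaccuracy principle replaces the exact classes by fuzzy information; its precise formulation is given in the paper.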

Details

Kybernetes, vol. 16 no. 1
Type: Research Article
ISSN: 0368-492X

Book part
Publication date: 23 June 2016

Amos Golan and Robin L. Lumsdaine

Abstract

Although in principle prior information can significantly improve inference, incorporating incorrect prior information will bias the estimates of any inferential analysis. This fact deters many scientists from incorporating prior information into their inferential analyses. In the natural sciences, where experiments are conducted more regularly and can be combined with other relevant information, prior information is often used in inferential analysis, even though it is sometimes nontrivial to specify what that information is and how to quantify it. In the social sciences, however, prior information is often hard to come by and very hard to justify or validate. We review a number of ways to construct such information. This information emerges naturally, either from fundamental properties and characteristics of the systems studied or from logical reasoning about the problems being analyzed. Borrowing from concepts and philosophical reasoning used in the natural sciences, and within an info-metrics framework, we discuss three different, yet complementary, approaches for constructing prior information, with an application to the social sciences.
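
One standard route by which such prior information enters an info-metrics analysis (a generic formulation, not necessarily the one used in this chapter) is minimum cross-entropy: given a prior $q$ and moment constraints, one selects

$$ \hat{p} = \arg\min_{p} \sum_{x} p(x) \log \frac{p(x)}{q(x)} \quad \text{subject to} \quad \sum_{x} p(x) g_{m}(x) = \bar{g}_{m}, \quad \sum_{x} p(x) = 1, $$

which reduces to Jaynes's maximum entropy principle when the prior $q$ is uniform.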

Details

Essays in Honor of Aman Ullah
Type: Book
ISBN: 978-1-78560-786-8

Article
Publication date: 1 October 1996

Guy Jumarie

Abstract

Surveys some of the important contributions of information theory (IT) to the understanding of systems science and cybernetics. Presents a short background on the main definitions of IT and examines the ways in which IT can be thought of as a unified approach to general systems. Analyses the following topics: syntax and semantics in information, information and self‐organization, entropy of forms (entropy of non‐random functions), and information in dynamical systems. Enumerates some suggestions for further research and takes this opportunity to describe new points of view, mainly by using the entropy of non‐random functions.

Details

Kybernetes, vol. 25 no. 7/8
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 October 1996

George J. Klir and David Harmanec

Abstract

Provides an overview of major developments pertaining to generalized information theory during the lifetime of Kybernetes. Generalized information theory is viewed as a collection of concepts, theorems, principles, and methods for dealing with problems involving uncertainty‐based information that are beyond the narrow scope of classical information theory. Introduces well‐justified measures of uncertainty in fuzzy set theory, possibility theory, and Dempster‐Shafer theory. Shows how these measures are connected with the classical Hartley measure and Shannon entropy. Discusses basic issues regarding some principles of generalized uncertainty‐based information.
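
For reference, the two classical measures that the generalised theory extends are, in standard form, the Hartley measure of a finite set $A$ of possible alternatives and the Shannon entropy of a probability distribution $p$:

$$ H(A) = \log_{2} |A|, \qquad S(p) = -\sum_{x} p(x) \log_{2} p(x). $$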

Details

Kybernetes, vol. 25 no. 7/8
Type: Research Article
ISSN: 0368-492X

Details

Applying Maximum Entropy to Econometric Problems
Type: Book
ISBN: 978-0-76230-187-4

Article
Publication date: 1 February 1983

T.G. Avgeris

Abstract

The Mutual Information Principle (MIP) has already been used in various areas as a generalization of the Maximum Entropy Principle (MEP), in the very common situation where our measurements of a random variable contain errors having some known average value. An axiomatic derivation of the MIP is given below, in order to place it in a rigorous mathematical framework with as few intuitive arguments as possible. The procedure followed is similar to the one proposed by Shore and Johnson for the Minimum Cross‐entropy Principle, and some relationships between the two methods of inductive inference are pointed out.
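
For reference, the mutual information underlying the MIP is the standard quantity

$$ I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)}, $$

that is, the Kullback divergence between the joint distribution and the product of its marginals; the MIP constrains this quantity when measurements are noisy, and its precise formulation is given in the paper.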

Details

Kybernetes, vol. 12 no. 2
Type: Research Article
ISSN: 0368-492X

1 – 10 of over 47000