Search results
1 – 10 of 229

Minghu Ha, Jiqiang Chen, Witold Pedrycz and Lu Sun
Abstract
Purpose
Bounds on the rate of convergence of learning processes based on random samples and probability are one of the essential components of statistical learning theory (SLT). The constructive distribution‐independent bounds on generalization are the cornerstone of constructing support vector machines. Random sets and set‐valued probability are important extensions of random variables and probability, respectively. The paper aims to address these issues.
Design/methodology/approach
In this study, the bounds on the rate of convergence of learning processes based on random sets and set‐valued probability are discussed. First, the Hoeffding inequality is enhanced for random sets, and then, making use of the key theorem, the non‐constructive distribution‐dependent bounds of learning machines based on random sets in set‐valued probability space are revisited. Second, some properties of random sets and set‐valued probability are discussed.
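For orientation, the classical scalar Hoeffding inequality that the paper enhances for random sets can be stated as follows (this is the standard real-valued form, not the set-valued enhancement developed in the paper): for independent random variables $X_i \in [a_i, b_i]$ and any $t > 0$,

$$P\left(\left|\frac{1}{n}\sum_{i=1}^{n}\bigl(X_i - \mathbb{E}[X_i]\bigr)\right| \ge t\right) \le 2\exp\left(-\frac{2n^2 t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right).$$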
Findings
In the sequel, the concepts of the annealed entropy, the growth function, and VC dimension of a set of random sets are presented. Finally, the paper establishes the VC dimension theory of SLT based on random sets and set‐valued probability, and then develops the constructive distribution‐independent bounds on the rate of uniform convergence of learning processes. It shows that such bounds are important to the analysis of the generalization abilities of learning machines.
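For context, the constructive distribution-independent bound of Vapnik for classical scalar samples (of which the paper develops a random-set analogue) states that, with probability at least $1-\eta$, simultaneously for all functions of a class with VC dimension $h$,

$$R(\alpha) \le R_{\mathrm{emp}}(\alpha) + \sqrt{\frac{h\left(\ln\frac{2n}{h} + 1\right) - \ln\frac{\eta}{4}}{n}},$$

where $R$ is the expected risk, $R_{\mathrm{emp}}$ the empirical risk and $n$ the sample size; published variants differ in constants and in the treatment of unbounded losses.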
Originality/value
SLT is currently regarded as one of the fundamental theories of statistical learning from small samples.
Minghu Ha, Witold Pedrycz, Jiqiang Chen and Lifang Zheng
Minghu Ha, Witold Pedrycz, Jiqiang Chen and Lifang Zheng
Abstract
Purpose
The purpose of this paper is to introduce, for the first time, some basic knowledge of statistical learning theory (SLT) based on random set samples in set‐valued probability space, and to generalize the key theorem and the bounds on the rate of uniform convergence of learning theory in Vapnik to the corresponding key theorem and bounds for random sets in set‐valued probability space. SLT based on random samples formed in probability space is currently considered one of the fundamental theories of statistical learning from small samples. It has become a novel and important field of machine learning, alongside other concepts and architectures such as neural networks. However, the theory hardly handles statistical learning problems whose samples are random sets.
Design/methodology/approach
Motivated by several applications, an SLT based on random set samples is developed in this paper. First, a certain law of large numbers for random sets is proved. Second, the definitions of the distribution function and the expectation of random sets are introduced, and the concepts of the expected risk functional and the empirical risk functional are discussed. A notion of the strict consistency of the principle of empirical risk minimization is presented.
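For reference, in the classical scalar setting the two risk functionals that these definitions generalize are

$$R(\alpha) = \int L\bigl(y, f(x, \alpha)\bigr)\, dP(x, y), \qquad R_{\mathrm{emp}}(\alpha) = \frac{1}{n}\sum_{i=1}^{n} L\bigl(y_i, f(x_i, \alpha)\bigr),$$

and strict consistency of empirical risk minimization requires the risks of the $R_{\mathrm{emp}}$-minimizers to converge in probability to $\inf_\alpha R(\alpha)$; the paper develops the analogues when the samples are random sets in set-valued probability space.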
Findings
The paper formulates and proves the key theorem and presents the bounds on the rate of uniform convergence of learning theory based on random sets in set‐valued probability space, which become cornerstones of the theoretical fundamentals of the SLT for random set samples.
Originality/value
The paper provides a careful analysis of some theoretical results of learning theory.
Wei Zhang, Peitong Cong, Kang Bian, Wei-Hai Yuan and Xichun Jia
Abstract
Purpose
Understanding the fluid flow through rock masses, which commonly consist of rock matrix and fractures, is a fundamental issue in many application areas of rock engineering. As the equivalent porous medium approach is the dominant approach for engineering applications, it is of great significance to estimate the equivalent permeability tensor of rock masses. This study aims to develop a novel numerical approach to estimate the equivalent permeability tensor for fractured porous rock masses.
Design/methodology/approach
The radial point interpolation method (RPIM) and finite element method (FEM) are coupled to simulate the seepage flow in fractured porous rock masses. The rock matrix is modeled by the RPIM, and the fractures are modeled explicitly by the FEM. A procedure for numerical experiments is then designed to determine the equivalent permeability tensor directly on the basis of Darcy’s law.
Findings
The coupled RPIM-FEM method is a reliable numerical method to analyze the seepage flow in fractured porous rock masses, which can consider simultaneously the influences of fractures and rock matrix. As the meshes of rock matrix and fracture network are generated separately without considering the topology relationship between them, the mesh generation process can be greatly facilitated. Using the proposed procedure for numerical experiments, which is designed directly on the basis of Darcy’s law, the representative elementary volume and equivalent permeability tensor of fractured porous rock masses can be identified conveniently.
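To make the kind of numerical experiment described above concrete, the sketch below fits an equivalent permeability tensor from pairs of imposed hydraulic gradients and computed mean fluxes via Darcy's law $q = -K\nabla h$. This is a generic least-squares illustration, not the authors' RPIM-FEM procedure; the function name, the 2D setting and the symmetric-tensor assumption are ours.

```python
import numpy as np

def fit_permeability_tensor(gradients, fluxes):
    """Estimate a symmetric 2D permeability tensor K from Darcy's law q = -K @ grad_h.

    gradients, fluxes: (n, 2) arrays holding the imposed hydraulic gradients
    and the resulting mean flux vectors from n numerical flow experiments.
    Unknowns are kxx, kyy, kxy (symmetry of K is enforced by construction).
    """
    G = np.asarray(gradients, dtype=float)
    Q = np.asarray(fluxes, dtype=float)
    rows, rhs = [], []
    for (gx, gy), (qx, qy) in zip(G, Q):
        rows.append([-gx, 0.0, -gy]); rhs.append(qx)   # qx = -(kxx*gx + kxy*gy)
        rows.append([0.0, -gy, -gx]); rhs.append(qy)   # qy = -(kxy*gx + kyy*gy)
    kxx, kyy, kxy = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    return np.array([[kxx, kxy], [kxy, kyy]])
```

With at least two independent gradient directions the system is overdetermined and the least-squares fit averages out numerical noise from the individual experiments.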
Originality/value
A novel numerical approach to estimate the equivalent permeability tensor for fractured porous rock masses is proposed. In the approach, the RPIM and FEM are coupled to simulate the seepage flow in fractured porous rock masses, and then a numerical experiment procedure directly based on Darcy’s law is introduced to estimate the equivalent permeability tensor.
Abstract
Purpose
Intends to address a fundamental problem in maintenance engineering: how should the shutdown of a production system be scheduled? In this regard, intends to investigate a way to predict the next system failure time based on the system's historical performance.
Design/methodology/approach
The GM(1,1) model from grey system theory and fuzzy set statistics methodologies are used.
Findings
It was found that the system's next unexpected failure time can be predicted with the grey system theory model as well as with the fuzzy set statistics methodology. In particular, grey modelling is more direct and involves less complicated mathematical treatment.
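As background on the grey modelling mentioned above, a minimal textbook GM(1,1) forecaster can be sketched as follows. This is a generic illustration, not the authors' grey filter; the function name and interface are assumed. The model fits the grey differential equation by least squares after an accumulated generating operation (AGO) and forecasts via the time-response function.

```python
import numpy as np

def gm11_predict(x0, n_ahead=1):
    """Fit a GM(1,1) grey model to the positive series x0 and forecast n_ahead steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])           # mean sequence of consecutive AGO values
    # Least-squares estimate of the development coefficient a and grey input b
    # from the grey equation x0[k] + a*z1[k] = b (assumes a != 0).
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time-response function, then inverse AGO to recover the original scale
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[-n_ahead:]
```

On a short series with roughly exponential trend, such as consecutive times between failures that drift as a system degrades, the one-step forecast is typically within a few percent, which is the directness the abstract alludes to.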
Research implications
Many maintenance models have been developed, but most of them seek optimality from the viewpoint of probabilistic theory. A new filtering theory based on grey system theory is introduced so that any actual system functioning (failure) time can be effectively partitioned into system characteristic functioning times and repair improvement (damage) times.
Practical implications
In today's highly competitive business world, effectively predicting the production system's next failure time can guarantee product quality and secure on‐schedule delivery under contract. The grey filters effectively predict the next system failure time as a function of the chronological time of the production system; the system's near‐future behaviour is thereby clearly shown, so that management can use this state information for production and maintenance planning.
Originality/value
Provides a viewpoint on system failure‐repair predictions.
M'Hamed El-Louh, Mohammed El Allali and Fatima Ezzaki
Abstract
Purpose
In this work, the authors are interested in the notion of vector valued and set valued Pettis integrable pramarts. The notion of pramart is more general than that of martingale. Every martingale is a pramart, but the converse is not generally true.
Design/methodology/approach
In this work, the authors present several properties and convergence theorems for Pettis integrable pramarts with convex weakly compact values in a separable Banach space.
Findings
The existence of the conditional expectation of Pettis integrable multifunctions indexed by bounded stopping times is provided. The authors prove the almost sure convergence in Mosco and linear topologies of Pettis integrable pramarts with values in cwk(E), the family of convex weakly compact subsets of a separable Banach space.
Originality/value
The purpose of the present paper is to present new properties and various new convergence results for convex weakly compact valued Pettis integrable pramarts in Banach space.
Xiaobin Lian, Jiafu Liu, Laohu Yuan and Naigang Cui
Abstract
Purpose
The purpose of this paper is to present a solution for the uncertain fault with the propulsion subsystem of satellite formation, using the Lur’e differential inclusion linear state observers (DILSOs) and fuzzy wavelet neural network (FWNN) to perform fault detection and diagnosis.
Design/methodology/approach
The uncertain fault system cannot be described by accurate differential equations. A set-valued mapping is introduced into the state equations to handle the uncertainty, but it makes the output uncertain; this problem is solved by linearization of the Lur’e differential inclusion state observers. The Lur’e DILSOs can then be used to detect uncertain faults, and fault isolation and estimation can be performed using the FWNN.
Findings
The mixed fault detection and diagnosis approach locates the uncertain fault quickly and correctly. The simulation results indicate that the designed methods are not only effective but also offer good approximation, fast detection, a relatively simple structure, little required prior knowledge and adaptive learning.
Research limitations/implications
The hybrid algorithm can be applied extensively in engineering practice to find uncertain faults of the propulsion subsystem of satellite formation promptly.
Originality/value
This paper provides a fast, effective and simple mixed fault detection and diagnosis scheme for satellite formation.
Abstract
Purpose
This paper aims to offer a tutorial/introduction to new statistics arising from the theory of optimal transport to empirical researchers in econometrics and machine learning.
Design/methodology/approach
The material is presented in a tutorial/survey lecture style to help practitioners absorb the theoretical background.
Findings
The tutorial survey of some main statistical tools (arising from optimal transport theory) should help practitioners to understand the theoretical background in order to conduct empirical research meaningfully.
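One of the basic statistics such a tutorial typically covers is the Wasserstein distance, which a short sketch can make concrete (a generic illustration, not taken from the paper): in one dimension the optimal transport plan simply matches order statistics, so the empirical distance between two equal-size samples reduces to sorting.

```python
import numpy as np

def wasserstein_1d(x, y, p=1):
    """Empirical p-Wasserstein distance between two same-size 1-D samples.

    In 1-D the optimal coupling pairs the i-th smallest of x with the
    i-th smallest of y, so the distance is (mean |x_(i) - y_(i)|^p)^(1/p).
    """
    xs, ys = np.sort(x), np.sort(y)
    return (np.mean(np.abs(xs - ys) ** p)) ** (1.0 / p)
```

For example, shifting a sample by a constant c moves its empirical distribution by exactly |c| in this metric, which is the translation sensitivity that makes optimal-transport statistics attractive in econometric applications.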
Originality/value
This study is an original presentation useful for newcomers to the field.
Boris Mitavskiy, Jonathan Rowe and Chris Cannings
Abstract
Purpose
The purpose of this paper is to establish a version of a theorem that originated from population genetics and has been later adopted in evolutionary computation theory that will lead to novel Monte‐Carlo sampling algorithms that provably increase the AI potential.
Design/methodology/approach
In the current paper the authors set up a mathematical framework, and state and prove a version of a Geiringer‐like theorem that is very well suited for developing Monte‐Carlo sampling algorithms that cope with randomness and incomplete information in decision making.
Findings
This work establishes an important theoretical link between classical population genetics, evolutionary computation theory and model free reinforcement learning methodology. Not only may the theory explain the success of the currently existing Monte‐Carlo tree sampling methodology, but it also leads to the development of novel Monte‐Carlo sampling techniques guided by rigorous mathematical foundation.
Practical implications
The theoretical foundations established in the current work provide guidance for the design of powerful Monte‐Carlo sampling algorithms in model free reinforcement learning, to tackle numerous problems in computational intelligence.
Originality/value
Establishing a Geiringer‐like theorem with non‐homologous recombination was a long‐standing open problem in evolutionary computation theory. Apart from overcoming this challenge in a mathematically elegant fashion and establishing a rather general and powerful version of the theorem, this work leads directly to the development of novel, provably powerful algorithms for decision making in environments involving randomness and hidden or incomplete information.
Proposes a possibilistic group support system (PGSS) for the retailer pricing and inventory problem when possibilistic fluctuations of product parameters are controlled by a set…
Abstract
Proposes a possibilistic group support system (PGSS) for the retailer pricing and inventory problem when possibilistic fluctuations of product parameters are controlled by a set of possibilistic optimality conditions. Experts in various functional areas convey their subjective judgement to the PGSS in the form of analytical models (for product parameter estimation), fuzzy concepts (facts), and possibilistic propositions (for validation and choice procedures). Basic probability assignments are used to elicit experts’ opinions. They are then transformed into compatibility functions for fuzzy concepts using the falling shadow technique. Evidence is processed in the form of fuzzy concepts and then rewritten back to basic probability assignments using the principle of least ignorance on randomness. The PGSS allows the user (inventory control) to examine a trade‐off between the belief value of a greater profit and a lower amount of randomness associated with it.