
Some theoretical results of learning theory based on random sets in set‐valued probability space

Minghu Ha (College of Mathematics and Computer Sciences, Hebei University, Baoding, People's Republic of China)
Witold Pedrycz (Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada and Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland)
Jiqiang Chen (College of Mathematics and Computer Sciences, Hebei University, Baoding, People's Republic of China)
Lifang Zheng (College of Mathematics and Computer Sciences, Hebei University, Baoding, People's Republic of China)

Kybernetes

ISSN: 0368-492X

Article publication date: 10 April 2009


Abstract

Purpose

The purpose of this paper is to introduce, for the first time, some basic elements of statistical learning theory (SLT) based on random set samples in set‐valued probability space, and to generalize Vapnik's key theorem and bounds on the rate of uniform convergence of learning theory to random sets in set‐valued probability space. SLT based on random samples in probability space is currently regarded as one of the fundamental theories of statistical learning from small samples, and it has become a novel and important field of machine learning, alongside other concepts and architectures such as neural networks. However, the classical theory can hardly handle statistical learning problems in which the samples are random sets.
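For orientation, a minimal sketch of the classical statement being generalized, written in Vapnik's standard point‐sample setting; the loss Q(z, α), parameter set Λ, and risks R, R_emp below follow the usual textbook notation and are not taken from this paper.

% Classical (non-set-valued) setting, for illustration only.
% Q(z,\alpha) is a loss function, \alpha \in \Lambda, and z_1,\dots,z_\ell are i.i.d. samples from P(z).
\[
  R(\alpha) = \int Q(z,\alpha)\, dP(z),
  \qquad
  R_{\mathrm{emp}}(\alpha) = \frac{1}{\ell}\sum_{i=1}^{\ell} Q(z_i,\alpha).
\]
% Key theorem (Vapnik): the ERM principle is (non-trivially) consistent if and only if
% the one-sided uniform convergence below holds for every \varepsilon > 0:
\[
  \lim_{\ell\to\infty}
  P\Bigl\{\,\sup_{\alpha\in\Lambda}\bigl(R(\alpha)-R_{\mathrm{emp}}(\alpha)\bigr) > \varepsilon\Bigr\} = 0.
\]

The paper establishes an analogue of this equivalence when the samples are random sets and the underlying measure is a set‐valued probability.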

Design/methodology/approach

Motivated by a number of applications, an SLT based on random set samples is developed in this paper. First, a law of large numbers for random sets is proved. Second, the distribution function and the expectation of random sets are defined, and the expected risk functional and the empirical risk functional are discussed. A notion of strict consistency of the principle of empirical risk minimization is presented (a classical‐setting sketch of this notion is given below).
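As a point of reference, a sketch of strict consistency in the classical point‐sample setting, following Vapnik's definition; the set‐valued analogue studied in the paper is not reproduced here.

% Strict (non-trivial) consistency of ERM in the classical setting, for illustration;
% \Lambda(c) restricts attention to functions whose expected risk is at least c.
\[
  \Lambda(c) = \{\alpha \in \Lambda : R(\alpha) \ge c\}, \qquad c \in (-\infty,\infty),
\]
\[
  \inf_{\alpha\in\Lambda(c)} R_{\mathrm{emp}}(\alpha)
  \;\xrightarrow{\ P\ }\;
  \inf_{\alpha\in\Lambda(c)} R(\alpha)
  \qquad \text{as } \ell \to \infty, \text{ for every } c.
\]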

Findings

The paper formulates and proves the key theorem, and presents bounds on the rate of uniform convergence, for learning theory based on random sets in set‐valued probability space; these results form the theoretical cornerstones of an SLT for random set samples.
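For comparison, one commonly quoted form of the classical Vapnik–Chervonenkis bound on the rate of uniform convergence for indicator (0–1) losses; constants vary across presentations, and the bounds proved in the paper concern random sets in set‐valued probability space rather than this classical case.

% Classical VC bound (indicator losses), quoted for orientation only;
% \Delta^{\Lambda}(2\ell) denotes the growth function evaluated on 2\ell samples.
\[
  P\Bigl\{\,\sup_{\alpha\in\Lambda}\bigl|R(\alpha)-R_{\mathrm{emp}}(\alpha)\bigr| > \varepsilon\Bigr\}
  \;\le\; 4\,\Delta^{\Lambda}(2\ell)\,\exp\!\left(-\frac{\varepsilon^{2}\ell}{8}\right).
\]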

Originality/value

The paper provides a detailed analysis of theoretical results of learning theory for random set samples in set‐valued probability space.

Citation

Ha, M., Pedrycz, W., Chen, J. and Zheng, L. (2009), "Some theoretical results of learning theory based on random sets in set‐valued probability space", Kybernetes, Vol. 38 No. 3/4, pp. 635-657. https://doi.org/10.1108/03684920910944867

Publisher

Emerald Group Publishing Limited

Copyright © 2009, Emerald Group Publishing Limited
