Minghu Ha, Witold Pedrycz, Jiqiang Chen and Lifang Zheng
Abstract
Purpose
The purpose of this paper is to introduce, for the first time, some basic knowledge of statistical learning theory (SLT) based on random set samples in set-valued probability space, and to generalize Vapnik's key theorem and bounds on the rate of uniform convergence of learning theory to the key theorem and bounds on the rate of uniform convergence for random sets in set-valued probability space. SLT based on random samples formed in probability space is considered, at present, one of the fundamental theories of small-sample statistical learning. It has become a novel and important field of machine learning, along with other concepts and architectures such as neural networks. However, the existing theory can hardly handle statistical learning problems for samples that involve random sets.
Design/methodology/approach
Motivated by several applications, a SLT based on random set samples is developed in this paper. First, a law of large numbers for random sets is proved. Second, the definitions of the distribution function and the expectation of random sets are introduced, and the concepts of the expected risk functional and the empirical risk functional are discussed. A notion of the strict consistency of the principle of empirical risk minimization is then presented.
Findings
The paper formulates and proves the key theorem and presents the bounds on the rate of uniform convergence of learning theory based on random sets in set‐valued probability space, which become cornerstones of the theoretical fundamentals of the SLT for random set samples.
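For orientation, the classical (point-sample) key theorem that the paper generalizes states that the principle of empirical risk minimization is strictly consistent if and only if one-sided uniform convergence of the empirical risk to the expected risk holds; in Vapnik's notation, for every ε > 0,

```latex
% Vapnik's key theorem in the classical setting: R is the expected risk,
% R_emp the empirical risk over \ell samples, and \Lambda the function class.
% The paper's version restates this for random sets in set-valued
% probability space.
\lim_{\ell \to \infty}
P\Bigl\{\, \sup_{\alpha \in \Lambda}
  \bigl( R(\alpha) - R_{\mathrm{emp}}(\alpha) \bigr) > \varepsilon \,\Bigr\} = 0 .
```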
Originality/value
The paper provides a detailed analysis of some theoretical results of learning theory.
Minghu Ha, Jiqiang Chen, Witold Pedrycz and Lu Sun
Abstract
Purpose
Bounds on the rate of convergence of learning processes based on random samples and probability are one of the essential components of statistical learning theory (SLT). The constructive distribution‐independent bounds on generalization are the cornerstone of constructing support vector machines. Random sets and set‐valued probability are important extensions of random variables and probability, respectively. The paper aims to address these issues.
Design/methodology/approach
In this study, the bounds on the rate of convergence of learning processes based on random sets and set-valued probability are discussed. First, the Hoeffding inequality is extended to random sets, and then, making use of the key theorem, the non-constructive distribution-dependent bounds of learning machines based on random sets in set-valued probability space are revisited. Second, some properties of random sets and set-valued probability are discussed.
Findings
In the sequel, the concepts of the annealed entropy, the growth function, and VC dimension of a set of random sets are presented. Finally, the paper establishes the VC dimension theory of SLT based on random sets and set‐valued probability, and then develops the constructive distribution‐independent bounds on the rate of uniform convergence of learning processes. It shows that such bounds are important to the analysis of the generalization abilities of learning machines.
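For comparison, in the classical point-sample setting the constructive distribution-independent bound takes Vapnik's familiar form: for a class of indicator functions with VC dimension h, with probability at least 1 − η over ℓ samples,

```latex
% Classical VC bound (point-valued samples); the paper derives the
% analogous bound for random sets in set-valued probability space.
R(\alpha) \le R_{\mathrm{emp}}(\alpha)
  + \sqrt{\frac{h \left( \ln \frac{2\ell}{h} + 1 \right)
                - \ln \frac{\eta}{4}}{\ell}} .
```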
Originality/value
SLT is considered at present one of the fundamental theories of small-sample statistical learning.
Abstract
Purpose
The purpose of this paper is to comparatively analyze the electrical circuits defined with the conventional and revisited time domain circuit element definitions in the context of fractional conformable calculus and to promote the combined usage of conventional definitions, fractional conformable derivative and conformable Laplace transform.
Design/methodology/approach
The RL, RC, LC and RLC circuits described by both conventional and revisited time domain circuit element definitions have been analyzed by means of fractional conformable derivative-based differential equations and the conformable Laplace transform. A comparison has been made between the obtained results and those based on the methodologies adopted in previous works.
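For reference, the conformable derivative of order α ∈ (0, 1] and the conformable Laplace transform commonly used in such analyses (following Khalil et al. and Abdeljawad) are

```latex
T_{\alpha} f(t) = \lim_{\varepsilon \to 0}
  \frac{f\bigl(t + \varepsilon\, t^{1-\alpha}\bigr) - f(t)}{\varepsilon},
\qquad
\mathcal{L}_{\alpha}\{ f(t) \}(s)
  = \int_{0}^{\infty} e^{-s\, t^{\alpha}/\alpha}\, f(t)\, t^{\alpha - 1}\, dt ,
```

so that T_α f(t) = t^{1−α} f′(t) whenever f is differentiable, which is what makes the conformable circuit equations tractable by the transform.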
Findings
The author has found that the conventional definitions-based solution gives a physically reasonable result, unlike its revisited definitions-based counterpart and the solutions based on those previous methodologies. Strong agreement with the time domain state space concept-based solution can be observed. The author has also shown that the scalar-valued solution can be obtained directly by the singularity-free conformable Laplace transform-based methodology, unlike the state space concept-based one.
Originality/value
For the first time, the revisited time domain definitions of resistance and inductance have been proposed and applied, together with the revisited definition of capacitance, in electrical circuit analyses. The advantage of the combined usage of conventional time domain definitions, the fractional conformable derivative and the conformable Laplace transform has been suggested, and the impropriety of applying the revisited definitions in circuit analysis has been pointed out.
Keywords
- Conformable Laplace transform
- Conventional time domain circuit element definition
- Fractional conformable derivative
- Hamiltonian
- Lagrangian
- Local fractional derivative
- Nonlocal fractional derivative
- Revisited time domain circuit element definition
- Circuit analysis
- Transient analysis
- Time domain modelling
Kyung Eun Lim, Jee Seon Baek and Eui Yong Lee
Abstract
Purpose
A random shock model for a system whose state deteriorates continuously is introduced and stochastically analyzed. It is assumed in the model that the state of the system follows a Brownian motion with negative drift and is also subject to random shocks. A repairman arrives at the system according to a Poisson process and repairs the system if the state has been below a threshold since the last repair.
Design/methodology/approach
Kolmogorov's forward differential equation is adopted together with a renewal argument to analyze the model stochastically. The renewal reward theorem is used to obtain the long‐run average cost per unit time.
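The renewal reward theorem invoked here states that if the n-th regeneration cycle incurs cost C_n and has length T_n (i.i.d. across cycles, with E[T_1] finite), then the long-run average cost per unit time is

```latex
\lim_{t \to \infty} \frac{C(t)}{t}
  = \frac{E[C_{1}]}{E[T_{1}]} \quad \text{almost surely},
```

where C(t) is the total cost accumulated up to time t; the optimization then amounts to minimizing this ratio over the repair threshold and related parameters.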
Findings
An explicit expression is deduced for the stationary distribution of the state of the system. After assigning several costs to the system, an optimization is also studied as an example.
Practical implications
The present model can be used to manage a complex system whose state deteriorates both continuously and jumpwise due to the continuous wear and random shocks, such as a machine and a production line in a factory. The model can also be applied to an inventory which supplies the stock both continuously and jumpwise, such as a gas station and the distribution center for a franchise, if the continuous wear and random shocks are considered as demands for the stock.
Originality/value
The present model is quite complicated, yet more realistic than previous models, in which the state of the system is subject to either continuous wear or random shocks alone.
Salih Tekin, Kemal Bicakci, Ozgur Mersin, Gulnur Neval Erdem, Abdulkerim Canbay and Yusuf Uzunay
Abstract
Purpose
With the irresistible growth in digitization, data backup policies have become more essential than ever for organizations seeking to improve the reliability and availability of their information systems. However, since backup operations do not come free, there is a need for a data-informed policy that decides how often and which type of backups should be taken. In this paper, the authors present a comprehensive mathematical framework to explore the design space for backup policies and to optimize the backup type and interval in a given system. In this framework, three separate cost factors related to the backup process are identified: backup cost, recovery cost and data loss cost. The objective function has a multi-criteria structure, leading to a backup policy that minimizes a weighted function of these factors. To formalize the cost and objective functions, the authors draw on renewal theory from reliability modeling. The optimization framework also formulates mixed policies involving both full and incremental backups. Through numerical examples, the authors show how the framework could facilitate cost-saving backup policies.
Design/methodology/approach
The methodology starts with designing different backup policies based on system parameters. Each constructed policy is optimized in terms of backup period using renewal theory. After selecting the best back-up policy, the results are demonstrated through numerical studies.
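As a rough illustration of this kind of optimization (not the authors' actual parameterization), the sketch below grid-searches a full-backup interval T that minimizes a weighted sum of backup, recovery and data-loss cost rates under a hypothetical Poisson failure model; all rates, costs and weights here are made-up assumptions.

```python
# Hypothetical backup-interval optimization sketch. The exponential
# failure model and every numeric constant are illustrative assumptions,
# not the parameterization used in the paper.
import math

def expected_cost_rate(T, c_b=5.0, c_r=50.0, c_l=200.0, lam=0.01,
                       w=(1.0, 1.0, 1.0)):
    """Long-run average cost per unit time for full backups every T hours.

    With failures arriving at rate lam (Poisson), the probability of at
    least one failure in a cycle is 1 - exp(-lam * T); expected data loss
    is taken proportional to the mean backup age T/2 at failure.
    """
    p_fail = 1.0 - math.exp(-lam * T)
    backup_rate = c_b / T                     # backup cost per unit time
    recovery_rate = c_r * p_fail / T          # expected recovery cost rate
    loss_rate = c_l * p_fail * (T / 2.0) / T  # expected data-loss cost rate
    w_b, w_r, w_l = w
    return w_b * backup_rate + w_r * recovery_rate + w_l * loss_rate

def best_interval(candidates):
    """Grid-search the candidate intervals for the minimum cost rate."""
    return min(candidates, key=expected_cost_rate)

intervals = [1, 2, 4, 8, 12, 24, 48, 96]
T_star = best_interval(intervals)
```

Short intervals inflate the backup cost rate while long intervals inflate the expected data loss, so an interior minimum emerges, which is exactly the trade-off the renewal-theoretic framework formalizes.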
Findings
Data backup policies that are tailored to system parameters can result in significant gains for IT (Information Technology) systems. Collecting the necessary parameters to design intelligent backup policies can also help managers understand their systems better. The designed policies specify not only the frequency of backup operations but also the type of backups.
Originality/value
The original contribution of this study is the explicit construction and determination of the best backup policies for IT systems that are prone to failure. By applying renewal theory in reliability, the authors present a mathematical framework for the joint optimization of backup cost factors, i.e. backup cost, recovery time cost and data loss cost.
Abstract
Purpose
The purpose of the paper is to analyze reliability characteristics of batch service queuing system with a single server model that envisages Poisson input process and exponential service times under first come, first served (FCFS) queue discipline.
Design/methodology/approach
With the help of renewal theory and stochastic processes, a model has been designed to discuss the reliability and its characteristics.
Findings
The instantaneous and steady-state availability along with the maintenance model of the systems subject to generalized M/Mb/1 queuing model is derived, and a few particular cases for availability are obtained as well. For supporting the developed model, a case study on electrical distribution system (EDS) has been illustrated, which also includes a comparison for the system subject to M/Mb/1 queuing model and the system without any queue (delay).
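As a baseline for these availability results, the classical two-state Markov model with failure rate λ and repair rate μ (i.e. the no-queue special case, not the full batch-service result derived in the paper) gives instantaneous and steady-state availability

```latex
A'(t) = -\lambda\, A(t) + \mu \bigl( 1 - A(t) \bigr),
\qquad
A(t) = \frac{\mu}{\lambda + \mu}
     + \frac{\lambda}{\lambda + \mu}\, e^{-(\lambda + \mu) t},
\qquad
A(\infty) = \frac{\mu}{\lambda + \mu} .
```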
Originality/value
It is a quite realistic model that may help remove congestion in the system during repair.
Uzoma Vincent Patrick-Agulonye
Abstract
Purpose
The purpose of this study is to determine the impact of community-based and driven approaches during the lockdowns and early periods of the pandemic. The study examines the impact and perceptions of the state-led intervention. This would help to discover a better approach for postpandemic interventions and policy responses.
Design/methodology/approach
This article used the inductive method and gathered its data from surveys. To obtain global opinions on the COVID-19 responses received in communities, two countries on each continent with high COVID-19 infection rates per 100,000 during the peak period were chosen for study. In total, 13 community workers, leaders and members per continent were sampled. A simple percentile method was chosen for analysis, and the results were discussed through simple interpretation.
Findings
The study showed that poor publicity of community-based interventions affected their awareness and recognition, as most were mistaken for government interventions. The study found that most respondents preferred state interventions, but favored community or local assessments of projects and interventions while they were ongoing, so that adjustments could be made as the work progressed. Nevertheless, many preferred community-based and driven interventions.
Research limitations/implications
Data collection for this study was limited by state secrecy and the oppression of perceived opposition voices in countries where state interventions are carried out in secret. Thus, last-minute changes were made to gather data from other countries on the same continent. An intercontinental study requires data from more countries, which would require more time and resources. This study was also affected by limited access to locals in remote areas, where raw data would have benefited the study.
Practical implications
The absence of data from the two most populous countries due to government censorship limits access to over a third of the global population, as they account for about 2.8 billion of the world's 7 billion people.
Social implications
The choice of two countries in each continent is representational enough, yet the absence of data from the two most populous countries creates a social identity gap.
Originality/value
The survey collected unique and genuine data and presents novel results. Thus, this study provides an important contribution to the literature on the subject. There is a need for maximum support for community-based interventions and projects as well as global data collection on community-based or driven interventions and projects.
Kaouther Ibn Taarit and Mekki Ksouri
Abstract
Purpose
A fast identification algorithm for a linear monotonic process from a step response is proposed in this paper, from which the parameters of a first‐order plus dead‐time model can be obtained directly.
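For context, a first-order plus dead-time (FOPDT) model K·e^{-Ls}/(τs + 1) is often fitted from a step response. The sketch below uses the classical two-point (28.3%/63.2%) method as a simple baseline; this is not the paper's distributional algorithm, and the data are synthetic.

```python
# Hypothetical sketch: classical two-point (28.3% / 63.2%) FOPDT fit
# from step-response data, shown only as a baseline, NOT the paper's
# non-asymptotic distributional estimator.
import numpy as np

def fopdt_two_point(t, y, u_step=1.0):
    """Estimate gain K, time constant tau and dead time L of
    K * exp(-L s) / (tau s + 1) from a monotonic step response."""
    y0, y_inf = y[0], y[-1]
    K = (y_inf - y0) / u_step
    span = y_inf - y0
    # times at which the response crosses 28.3% and 63.2% of its span
    t1 = np.interp(y0 + 0.283 * span, y, t)
    t2 = np.interp(y0 + 0.632 * span, y, t)
    tau = 1.5 * (t2 - t1)   # since t2 - t1 = (2/3) * tau
    L = max(t2 - tau, 0.0)  # since t2 = L + tau
    return K, tau, L

# Synthetic response of K = 2, tau = 3, L = 1 to a unit step
t = np.linspace(0.0, 30.0, 3001)
y = np.where(t < 1.0, 0.0, 2.0 * (1.0 - np.exp(-(t - 1.0) / 3.0)))
K, tau, L = fopdt_two_point(t, y)
```

The two crossing times suffice because the 63.2% point of a pure first-order rise occurs at L + τ and the 28.3% point at L + τ/3, so both τ and L fall out of a linear combination.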
Design/methodology/approach
The study is based on a non-asymptotic distributional estimation technique originally introduced in the framework of delay-free systems. Such a technique leads to simple realization schemes, involving integrators, multipliers and piecewise polynomial or exponential time functions, and shows a possible link between simultaneous identification and generalized eigenvalue problems. Thus, it allows for a real-time implementation.
Findings
The effectiveness of the identification method has been demonstrated through a number of simulation examples and a real‐time test.
Originality/value
This paper presents a novel method for the simultaneous identification of the delay and the parameters of a stable first-order plus time-delay model from a step response, a model that can represent a widespread class of systems.
Juan Carlos Cuestas and Merike Kukk
Abstract
Purpose
This paper aims to investigate the mutual dependence between housing prices and housing credit in Estonia, a country that experienced rapid debt accumulation during the 2000s and big swings in house prices during that period.
Design/methodology/approach
The authors use Bayesian econometric methods on data spanning 2000–2015.
Findings
The estimations show the interdependence between house prices and housing credit. More importantly, negative housing credit innovations had a stronger effect on house prices than positive ones.
Originality/value
The asymmetry in the linkage between housing credit and house prices highlights important policy implications: if central banks increase capital buffers during good times, they can ease credit conditions during hard times to alleviate the negative spillover into house prices and the real economy.