Miao Yu, Jun Gong, Jiafu Tang and Fanwen Kong
Abstract
Purpose
The purpose of this paper is to provide delay announcements for call centers with hyperexponential patience modeling. The paper aims to employ a state-dependent Markovian approximation for informing arriving customers about anticipated delay in a real call center.
Design/methodology/approach
Motivated by real call center data, the patience distribution is modeled by the hyperexponential distribution and analyzed for its practical significance, with and without delay information. An appropriate M/M/s/r+H2 queueing model is formulated, including the voice response system employed in practice, and a state-dependent Markovian approximation is applied to compute abandonment. Based on this approximation, a method is proposed for estimating virtual delays, and the problem of announcing virtual delays to customers upon arrival is investigated.
Findings
The findings from the case study and a numerical simulation study fall into two parts. First, the use of an H2 distribution for the abandonment distribution is driven by an empirical study that shows its good fit to real-life call center data. Second, simulation experiments indicate that the model and approximation are reasonable, and that the state-dependent Markovian approximation works very well for call centers with larger pooling. It is concluded that the approach can be applied in the voice response systems of real call centers.
Originality/value
Many results concerning the announcement of delay information, customer reactions and the estimation of hyperexponential distributions from real data had not been established in previous studies; this paper analytically characterizes these performance measures for delay announcements.
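As a hedged illustration of the patience model (not the paper's calibrated model), the following Python sketch samples an H2 patience distribution with hypothetical parameters and checks the property that motivates its use: a squared coefficient of variation above 1.

```python
import random
import statistics

def sample_h2(p, mu1, mu2, rng):
    """Draw one hyperexponential (H2) patience time: with probability p an
    Exp(mu1) draw, otherwise an Exp(mu2) draw."""
    rate = mu1 if rng.random() < p else mu2
    return rng.expovariate(rate)

def h2_moments(p, mu1, mu2):
    """Exact mean and variance of the H2 mixture."""
    mean = p / mu1 + (1 - p) / mu2
    second = 2 * p / mu1 ** 2 + 2 * (1 - p) / mu2 ** 2
    return mean, second - mean ** 2

# hypothetical parameters, not fitted to the paper's call center data
p, mu1, mu2 = 0.3, 0.5, 4.0
rng = random.Random(42)
samples = [sample_h2(p, mu1, mu2, rng) for _ in range(200_000)]
mean, var = h2_moments(p, mu1, mu2)
cv2 = var / mean ** 2
# An H2 mixture always has squared coefficient of variation >= 1, i.e. it is
# more variable than an exponential, the feature that matches patience data.
print(statistics.mean(samples), mean, cv2)
```

The H2 family is a two-phase mixture, so fitting it amounts to choosing a mixing probability and two rates, which is why it is tractable inside a Markovian queueing model.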
Abstract
Purpose
The aim of this paper is to propose and analyse policies capable of generating left‐skewed pension distributions. Such policies can deliver large pension values with high probability and hence are of interest to practical fund managers.
Design/methodology/approach
The paper uses a computational method capable of solving stochastic optimal control problems. The optimal strategies obtained through the method are used to simulate dynamic portfolio management.
Findings
The paper finds that optimisation of locally non-concave performance measures produces left-skewed payoff distributions with small VaR and CVaR. The distributions remain left-skewed for relatively large values of the diffusion parameter.
Practical implications
On the basis of the findings, it would seem beneficial for real‐world fund managers to implement this kind of optimising “cautious‐relaxed” policy.
Originality/value
A novel non‐concave performance measure has been proposed in the paper to describe a portfolio manager's aim. The computed “cautious‐relaxed” policies have been shown to realise this aim.
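To make the risk measures concrete, the following sketch computes empirical VaR and CVaR from a hypothetical left-skewed payoff sample; the distribution and its parameters are illustrative only, not the paper's optimised portfolio.

```python
import random
import statistics

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR at level alpha: VaR is the alpha-quantile of
    the loss distribution, CVaR the mean loss beyond it."""
    s = sorted(losses)
    k = int(alpha * len(s))
    return s[k], statistics.mean(s[k:])

rng = random.Random(0)
# A left-skewed payoff: most mass at high values, a thin left tail
# (hypothetical lognormal-style construction, chosen for illustration only).
payoffs = [100.0 - rng.lognormvariate(1.0, 1.0) for _ in range(50_000)]
losses = [-p for p in payoffs]
var, cvar = var_cvar(losses, 0.95)
print(var, cvar)
```

For a left-skewed payoff the mean falls below the median, and CVaR is by construction at least as large as VaR, which is the sense in which such policies deliver large pension values with high probability.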
Cheng-De Zheng and Zhanshan Wang
Abstract
Purpose
The purpose of this paper is to develop a methodology for the stochastically asymptotic synchronization problem for a class of neutral-type chaotic neural networks with both leakage delay and Markovian jumping parameters under impulsive perturbations.
Design/methodology/approach
The authors apply the drive-response concept and time-delay feedback control techniques to a class of neutral-type chaotic neural networks with both leakage delay and Markovian jumping parameters under impulsive perturbations. A new sufficient criterion is established without strict conditions imposed on the activation functions.
Findings
The approach yields a new sufficient criterion that is easy to verify and dispenses with the usual assumptions of differentiability and monotonicity of the activation functions. Two examples show the effectiveness of the obtained results.
Originality/value
The novelty of the proposed approach lies in removing the usual assumptions of differentiability and monotonicity of the activation functions, and in combining the Lyapunov functional method, the Jensen integral inequality, a novel lemma due to Gu, and reciprocal convex and linear convex combination techniques to solve the stochastically asymptotic synchronization problem for this class of neutral-type chaotic neural networks.
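For reference, the Jensen integral inequality invoked in criteria of this kind states that, for any matrix R ≻ 0 and integrable vector function x on an interval of length τ:

```latex
\left(\int_{t-\tau}^{t} x(s)\,\mathrm{d}s\right)^{\!\top} R
\left(\int_{t-\tau}^{t} x(s)\,\mathrm{d}s\right)
\;\le\; \tau \int_{t-\tau}^{t} x(s)^{\top} R\, x(s)\,\mathrm{d}s .
```

It is used to bound the integral cross terms that arise when differentiating a Lyapunov-Krasovskii functional along the network trajectories.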
Abstract
A method for approximating the Shannon entropy of Gaussian photon-counting processes with infinite history was constructed from the memory function of these processes, described by autoregressive-integrated moving average (ARIMA) models. Most frequently, photon-counting processes are stationary or nonstationary multidimensional Gaussian discrete-time stochastic processes, which justifies the use of ARIMA models. Starting from the memory function, a memory-time-equivalent finite autoregressive representation of a given process with infinite history, i.e. a stationary finite-order Gaussian Markov chain, was determined; the corresponding autocorrelation matrices were then calculated from the truncated memory function using the Yule-Walker equations, and an autocorrelation-based formula was given for approximating the entropy of the process through the entropy of its stationary Markovian representation. An ARMA(1,1) process, together with its stationary (MA(1)) and nonstationary (IMA(0,1,1)) boundary cases, was considered to demonstrate opposite changes in the entropy as the memory time increases at a fixed process variance: the entropy was found to decrease for stationary processes and to increase for nonstationary ones. It was also found, on experimental examples (perturbed human neutrophils and yeast cells), that these changes can be reversed by opposite changes in the process variance. The method allows the Shannon entropy of time-discrete stochastic processes to be determined at any desired accuracy, and reveals new aspects of the relationship between a process's stationarity, memory, entropy and heteroskedasticity.
Abstract
Purpose
The purpose of this paper is to develop a methodology for the stochastically asymptotic stability of fuzzy Markovian jumping neural networks with time-varying delay and continuously distributed delay in mean square.
Design/methodology/approach
The authors employ the Briat lemma, a multiple-integral approach and the linear convex combination technique to investigate a class of fuzzy Markovian jumping neural networks with time-varying delay and continuously distributed delay. A new sufficient criterion is established in terms of linear matrix inequality (LMI) conditions.
Findings
The obtained criteria are easy to verify and less conservative than those in the existing literature. Two examples show the effectiveness of the proposed results.
Originality/value
The novelty of the proposed approach lies in establishing a new Wirtinger-based integral inequality and in combining the Lyapunov functional method, the Briat lemma, a multiple-integral approach and the linear convex combination technique for the stochastically asymptotic stability, in mean square, of fuzzy Markovian jumping neural networks with time-varying delay and continuously distributed delay.
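The paper's new inequality is not reproduced here; for context, the standard Wirtinger-based integral inequality that this family refines states that, for R ≻ 0 and differentiable x on [a, b]:

```latex
\int_{a}^{b} \dot{x}(s)^{\top} R\, \dot{x}(s)\,\mathrm{d}s \;\ge\;
\frac{1}{b-a}\Big[\big(x(b)-x(a)\big)^{\top} R\,\big(x(b)-x(a)\big)
+ 3\,\Omega^{\top} R\,\Omega\Big],
\qquad
\Omega = x(b) + x(a) - \frac{2}{b-a}\int_{a}^{b} x(s)\,\mathrm{d}s .
```

The extra Ω term tightens the Jensen bound, which is the mechanism behind the reduced conservatism of Wirtinger-type stability criteria.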
Pavel Pakshin and Sergey Soloviev
Abstract
Purpose
The purpose of this paper is to provide a parametric description (parametrization) of all static output feedback stabilizing controllers for linear stochastic discrete-time systems with Markovian switching, to apply this result to simultaneous and robust stabilization problems, and to obtain algorithms for computing stabilizing gains.
Design/methodology/approach
The proposed approach presents the parametrization in terms of coupled linear matrix equations and quadratic matrix inequalities that depend on parameter matrices similar to the weight matrices in linear quadratic regulator (LQR) theory. To avoid implementation problems, a convex approximation technique is used and linear matrix inequality (LMI)-based algorithms are obtained for computing stabilizing gains.
Findings
The algorithms obtained in this paper are non-iterative and use the computationally efficient LMI technique. Moreover, the well-known LQR methodology can be used in the controller design process.
Originality/value
As a result of this paper, a new unified approach to design of static output feedback stabilizing control is developed. This approach leads to efficient stabilizing gain computation algorithms for both stochastic systems with Markovian switching and deterministic systems with polytopic uncertainty.
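The paper's LMI parametrization is not reproduced here, but the stability notion underlying it can be checked numerically: for a Markovian jump linear system x_{k+1} = A_{r_k} x_k, mean-square stability holds iff the spectral radius of the second-moment operator, with block (j, i) equal to p_ij · (A_i ⊗ A_i), is below 1. A sketch with hypothetical mode matrices (e.g. closed-loop matrices A_i − B_i K after some stabilizing gain K):

```python
import numpy as np

def ms_stable(A_modes, P_trans):
    """Mean-square stability test for x_{k+1} = A_{r_k} x_k with a Markov
    switching signal r_k: build the second-moment operator whose (j, i)
    block is p_ij * kron(A_i, A_i) and check its spectral radius < 1."""
    n = A_modes[0].shape[0]
    m = len(A_modes)
    big = np.zeros((m * n * n, m * n * n))
    for i in range(m):
        K = np.kron(A_modes[i], A_modes[i])
        for j in range(m):
            big[j * n * n:(j + 1) * n * n,
                i * n * n:(i + 1) * n * n] = P_trans[i][j] * K
    return bool(max(abs(np.linalg.eigvals(big))) < 1.0)

# hypothetical closed-loop mode matrices and transition probabilities
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.3, 0.0], [0.2, 0.6]])
P_trans = [[0.7, 0.3], [0.4, 0.6]]
print(ms_stable([A1, A2], P_trans))
```

A gain computed from the paper's LMI conditions would be validated by exactly this kind of test, applied to the resulting closed-loop mode matrices.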
Abstract
Purpose
Markov chains and queuing theory are widely used analysis, optimization and decision‐making tools in many areas of science and engineering. Real life systems could be modelled and analysed for their steady‐state and time‐dependent behaviour. Performance measures such as blocking probability of a system can be calculated by computing the probability distributions. A major hurdle in the applicability of these tools to complex large problems is the curse of dimensionality problem because models for even trivial real life systems comprise millions of states and hence require large computational resources. This paper describes the various computational dimensions in Markov chains modelling and briefly reports on the author's experiences and developed techniques to combat the curse of dimensionality problem.
Design/methodology/approach
The paper formulates the Markovian modelling problem mathematically and shows, using case studies, that it poses both storage and computational time challenges when applied to the analysis of large complex systems.
Findings
The paper demonstrates using intelligent storage techniques, and concurrent and parallel computing methods that it is possible to solve very large systems on a single or multiple computers.
Originality/value
The paper develops an interesting case study to motivate the reader, and computes and visualises data for steady-state analysis of system performance across a set of seven scenarios. The methods reviewed in this paper allow efficient solution of very large Markov chains; contemporary methods cannot solve Markov models of the sizes considered here using similar computing machines.
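A toy version of the steady-state computation, using a sparse dictionary representation in the spirit of the intelligent storage techniques described; the three-state chain below is hypothetical, and real models of the sizes discussed would distribute this iteration across machines.

```python
# sparse row-stochastic transition structure {state: {next_state: prob}}
P = {
    0: {0: 0.5, 1: 0.5},
    1: {0: 0.25, 1: 0.25, 2: 0.5},
    2: {1: 0.5, 2: 0.5},
}

def steady_state(P, iters=10_000, tol=1e-12):
    """Power iteration pi <- pi P touching only the nonzero entries, the
    property that lets very large chains fit in memory as sparse data."""
    states = sorted(P)
    pi = {s: 1.0 / len(states) for s in states}
    for _ in range(iters):
        nxt = {s: 0.0 for s in states}
        for s, row in P.items():
            for t, p in row.items():
                nxt[t] += pi[s] * p
        done = max(abs(nxt[s] - pi[s]) for s in states) < tol
        pi = nxt
        if done:
            break
    return pi

pi = steady_state(P)
print(pi)
```

Performance measures such as blocking probability are then read off the converged distribution (here the exact answer is pi = (0.2, 0.4, 0.4)).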
Abstract
In the information theoretic framework, it is customary to address the problem of defining and analyzing the complexity and organization of systems either by using Shannon entropy, via Jaynes' maximum entropy principle, or by means of the so-called Kullback informational divergence, which measures the informational distance between two probability distributions. In the present paper, it is shown that the so-called self-divergence of Markovian processes can be a useful complement to this approach. After a short background on entropy and organization, we recall the definition of the divergence of Markovian processes, which is then used to analyze organization and complexity. We arrive at a principle of maximum self-divergence that characterizes systems with maximum organization.
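The divergence of Markovian processes referred to above can be illustrated (as an assumption about the intended quantity, not the paper's exact definition) by the Kullback divergence rate between two transition kernels, D = Σ_i π_i Σ_j P_ij log(P_ij / Q_ij), with π stationary for P. The two-state chains below are hypothetical:

```python
import math

def stationary(P):
    """Stationary distribution of a 2-state chain [[1-a, a], [b, 1-b]]."""
    a, b = P[0][1], P[1][0]
    return [b / (a + b), a / (a + b)]

def kl_divergence_rate(P, Q):
    """Kullback divergence rate between Markov kernels P and Q:
    D = sum_i pi_i sum_j P_ij * log(P_ij / Q_ij)."""
    pi = stationary(P)
    return sum(pi[i] * P[i][j] * math.log(P[i][j] / Q[i][j])
               for i in range(2) for j in range(2))

P = [[0.9, 0.1], [0.2, 0.8]]   # a strongly persistent (organized) chain
Q = [[0.5, 0.5], [0.5, 0.5]]   # a memoryless reference chain
print(kl_divergence_rate(P, Q))
```

D vanishes when P = Q and grows as P departs from the memoryless reference, which is the sense in which divergence can serve as an organization measure.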
Abstract
Purpose
For most practical control problems, the state variables of a system are not available or measurable due to technical or economic constraints. In these cases, an observer-based controller design problem, which uses the available information on inputs and outputs to reconstruct the unmeasured states, is desirable, and it has been widely investigated in many practical applications. However, discrete-time singular Markovian jumping systems have received little attention so far. This paper considers an observer-based control problem for a discrete-time singular Markovian jumping system and provides a set of easy-to-use conditions for the proposed control law.
Design/methodology/approach
Following the separation principle extended from linear systems, a mode-dependent observer and a state-feedback controller are designed independently via two sets of derived necessary and sufficient conditions in terms of linear matrix inequalities (LMIs).
Findings
A set of necessary and sufficient conditions for the admissibility analysis problem of a discrete-time singular Markovian jumping system is derived as a theoretical foundation for the proposed design problems. A mode-dependent observer and a controller for such systems can then be designed via two sets of strictly LMI-based synthesis conditions.
Research limitations/implications
The proposed method can be applied to discrete-time singular Markovian jumping systems with transition probabilities p_ij > 0 rather than those with p_ii = 0.
Practical implications
The formulated problem and proposed methods have extensive applications in fields such as power systems, electrical circuits, robot systems, chemical systems, networked control systems and interconnected large-scale systems. Take robotic networked control systems as an example: the variable phenomena arising from network transmission, such as packet dropout, loss and disorder, are suitably modeled as a system with Markovian jumping modes, while the dynamics of the robot can be described by a singular system. In addition, packet dropout or loss can result in unreliable transmission signals, which motivates an observer-based control problem.
Originality/value
The resultant conditions for both the analysis and synthesis problems of a discrete-time singular Markovian jumping system are necessary and sufficient, and are formulated as strict LMIs, which can be used and implemented easily via the MATLAB toolbox.
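The separation principle invoked above can be verified numerically on an ordinary (non-singular, non-jumping) discrete-time plant: under an observer-based controller, the closed-loop spectrum is exactly the union of the controller spectrum and the observer spectrum. All matrices below are hypothetical, chosen only so that both gains are stabilizing.

```python
import numpy as np

# hypothetical discrete-time plant; none of these matrices come from the paper
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
K = np.array([[10.0, 5.0]])   # state-feedback gain, stabilizes A - B K
L = np.array([[0.5], [0.5]])  # observer gain, stabilizes A - L C

# Closed loop in (x, e) coordinates with estimation error e = x - x_hat:
#   x+ = (A - B K) x + B K e,   e+ = (A - L C) e
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((2, 2)), A - L @ C]])

eig_cl = np.sort_complex(np.linalg.eigvals(Acl))
eig_sep = np.sort_complex(np.concatenate([np.linalg.eigvals(A - B @ K),
                                          np.linalg.eigvals(A - L @ C)]))
print(np.allclose(eig_cl, eig_sep), float(np.max(np.abs(eig_cl))))
```

The block-triangular structure of Acl is what makes the two designs independent, which is the property the paper extends to the mode-dependent singular jumping setting.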
Mohit Goswami, M. Ramkumar and Yash Daultani
Abstract
Purpose
This research aims to aid product development managers in estimating the expected cost of developing cost-intensive physical prototypes, considering transitions among the pertinent quality states of the prototype and the corresponding decision policies in a Markovian setting.
Design/methodology/approach
The authors develop two types of optimization-based mathematical models, under deterministic and randomized policies respectively. Under the deterministic policy, product development managers take definite decisions such as “Do nothing,” “Overhaul,” or “Replace” corresponding to the quality states “Good as new,” “Functional with minor deterioration,” “Functional with major deterioration” and “Non-functional.” Under the randomized policy, managers ascertain the probability distribution over these decisions for each quality state. In both settings, minimization of the expected cost of the prototype is the objective function.
Findings
Using an illustrative case of an operator cabin from the construction equipment domain, the authors find that the randomized policy provides better decision interventions, in that the expected cost of the prototype is lower than under the deterministic policy. The authors also ascertain the steady-state probabilities of the prototype remaining in each quality state. These findings have implications for the product development budget, time to market and product quality.
Originality/value
The authors’ work contributes optimization-driven mathematical models that capture the uncertainty in transitions between the quality states of a prototype and the decision policies at each state, while considering these facets for all constituent subsystems of the prototype. In contrast to a typical prescriptive study, it captures the inherent uncertainties associated with quality states in the context of prototype testing.
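The decision setting can be sketched as a small Markov decision process; the states, actions, costs and transition probabilities below are hypothetical, not the paper's case data, and the sketch uses value iteration for a deterministic policy (the paper's randomized-policy optimisation is not reproduced).

```python
# quality states and, per state, the available actions with
# (cost, transition distribution over next states) -- all numbers invented
STATES = ["good", "minor", "major", "dead"]
ACTIONS = {
    "good":  {"nothing":  (0.0,  {"good": 0.7, "minor": 0.3})},
    "minor": {"nothing":  (0.0,  {"minor": 0.5, "major": 0.5}),
              "overhaul": (3.0,  {"good": 0.9, "minor": 0.1})},
    "major": {"overhaul": (5.0,  {"good": 0.6, "minor": 0.4}),
              "replace":  (10.0, {"good": 1.0})},
    "dead":  {"replace":  (10.0, {"good": 1.0})},
}

def value_iteration(gamma=0.9, sweeps=500):
    """Minimize expected discounted prototype cost; returns the value
    function and the greedy deterministic policy."""
    v = {s: 0.0 for s in STATES}
    for _ in range(sweeps):
        v = {s: min(c + gamma * sum(p * v[t] for t, p in trans.items())
                    for c, trans in ACTIONS[s].values())
             for s in STATES}
    policy = {s: min(ACTIONS[s],
                     key=lambda a: ACTIONS[s][a][0] + gamma *
                     sum(p * v[t] for t, p in ACTIONS[s][a][1].items()))
              for s in STATES}
    return v, policy

v, policy = value_iteration()
print(policy, v)
```

With these invented numbers the expected cost is monotone in deterioration, and a randomized policy would generalise the `policy` dict to a probability distribution over actions per state, which is the comparison the paper carries out.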