Search results

1 – 10 of 12
Article
Publication date: 26 July 2011

Rashid Mehmood and Jie A. Lu

Abstract

Purpose

Markov chains and queuing theory are widely used analysis, optimization and decision‐making tools in many areas of science and engineering. Real-life systems can be modelled and analysed for their steady‐state and time‐dependent behaviour, and performance measures such as the blocking probability of a system can be calculated by computing the probability distributions. A major hurdle in applying these tools to large, complex problems is the curse of dimensionality: models of even trivial real-life systems comprise millions of states and hence require large computational resources. This paper describes the various computational dimensions in Markov chain modelling and briefly reports on the authors' experiences and the techniques they developed to combat the curse of dimensionality.

Design/methodology/approach

The paper formulates the Markovian modelling problem mathematically and shows, using case studies, that it poses both storage and computational time challenges when applied to the analysis of large complex systems.

Findings

The paper demonstrates, using intelligent storage techniques together with concurrent and parallel computing methods, that it is possible to solve very large systems on a single computer or on multiple computers.
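
To make concrete the kind of computation described above, here is a minimal sketch (not the authors' solver) that stores a small continuous-time Markov chain generator sparsely and obtains its steady-state distribution by power iteration on the uniformised matrix. The birth-death chain and all rates are invented placeholders; models of the size the paper targets require far more sophisticated storage and parallelism.

# Illustrative sketch only: steady-state of a small CTMC held in sparse storage.
# The birth-death generator below is a made-up placeholder, not the paper's model.
import numpy as np
from scipy.sparse import lil_matrix

n = 200                        # number of states; the paper's models run to millions
lam, mu = 0.8, 1.0             # placeholder arrival/service rates
Q = lil_matrix((n, n))
for i in range(n):
    out = 0.0
    if i + 1 < n:
        Q[i, i + 1] = lam
        out += lam
    if i > 0:
        Q[i, i - 1] = mu
        out += mu
    Q[i, i] = -out
Q = Q.tocsr()

# Uniformisation: P = I + Q/Lambda is stochastic, so iterate pi <- pi P to convergence.
Lam = abs(Q.diagonal()).max() * 1.05
P = (Q / Lam).tolil()
P.setdiag(P.diagonal() + 1.0)
P = P.tocsr().transpose()      # keep the transpose so each step is one sparse mat-vec
pi = np.full(n, 1.0 / n)
for _ in range(200_000):
    nxt = P @ pi
    if np.abs(nxt - pi).max() < 1e-12:
        pi = nxt
        break
    pi = nxt
print("P(empty system) ~", pi[0], "  blocking probability ~", pi[-1])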

Originality/value

The paper develops an interesting case study to motivate the reader, and computes and visualises data for steady‐state analysis of system performance across a set of seven scenarios. The methods reviewed in this paper allow efficient solution of very large Markov chains; contemporary methods cannot solve Markov models of the sizes considered here on similar computing machines.

Details

Journal of Manufacturing Technology Management, vol. 22 no. 6
Type: Research Article
ISSN: 1741-038X

Keywords

Article
Publication date: 3 January 2017

Rashid Mehmood, Royston Meriton, Gary Graham, Patrick Hennelly and Mukesh Kumar

Abstract

Purpose

The purpose of this paper is to advance knowledge of the transformative potential of big data for city-based transport models. The central question guiding this paper is: how could big data transform smart city transport operations? In answering this question, the authors present initial results from a Markov study. However, the authors also urge caution about the transformative potential of big data and highlight the risks of city and organizational adoption. A theoretical framework is presented, together with an associated scenario, which guides the development of a Markov model.

Design/methodology/approach

A model with several scenarios is developed to explore a theoretical framework focussed on matching the transport demands (of people and freight mobility) with city transport service provision using big data. This model was designed to illustrate how sharing transport load (and capacity) in a smart city can improve efficiencies in meeting demand for city services.
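
As a toy numerical companion to the load-sharing idea (not the authors' Markov model), the snippet below uses the standard Erlang-B loss formula to compare the blocking probability of two dedicated fleets against a single pooled fleet of the same total size; the demand figures are invented.

# Toy illustration of shared capacity: Erlang-B blocking for split vs pooled fleets.
# Demand figures are invented; this is not the paper's Markov model.
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability for an M/M/c/c loss system (iterative Erlang-B)."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

passenger_load, freight_load = 8.0, 6.0      # offered loads in Erlangs (made up)
split = (erlang_b(10, passenger_load) + erlang_b(10, freight_load)) / 2
pooled = erlang_b(20, passenger_load + freight_load)
print(f"average blocking, dedicated fleets: {split:.3%}")
print(f"blocking, shared fleet:             {pooled:.3%}")

For the same total capacity, the pooled configuration always blocks less, which is the efficiency gain the framework attributes to shared resources.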

Findings

This modelling study is a preliminary stage of the investigation into how big data could be used to redefine and enable new operational models. The study provides new understanding of load sharing and optimization in a smart city context. The authors demonstrate how big data could be used to improve transport efficiency and lower externalities in a smart city, and how further improvement could come from a car-free city environment, autonomous vehicles and shared resource capacity among providers.

Research limitations/implications

The research relied on a Markov model and the numerical solution of its steady-state probability vector to illustrate the transformation of transport operations management (OM) in the future city context. More in-depth analysis and more discrete modelling are clearly needed to assist in the implementation of big data initiatives and to facilitate new innovations in OM. The work complements and extends that of Setia and Patel (2013), who theoretically link information system design to operational absorptive capacity capabilities.

Practical implications

The study implies that transport operations would need to be reorganized to lower the CO2 footprint. The logistics aspects could be seen as a move from individual firms optimizing their own transportation supply to a shared, collaborative load and resource system. Such ideas are radical changes driven by, or leading to, decentralized rather than centralized transport solutions (Caplice, 2013).

Social implications

The growth of cities and urban areas in the twenty-first century has put more pressure on resources and on the conditions of urban life. This paper is a first step in building theory, knowledge and critical understanding of the social implications posed by the growth of cities and of the role that big data and smart cities could play in developing a resilient and sustainable city transport system.

Originality/value

Despite the importance of OM to big data implementation, for both practitioners and researchers, there has yet to be a systematic analysis of its implementation and of its absorptive capacity contribution to building capabilities, at either the city-system or the organizational level. As such, the Markov model makes a preliminary contribution to the literature by integrating big data capabilities with OM capabilities and the resulting improvements in system absorptive capacity.

Details

International Journal of Operations & Production Management, vol. 37 no. 1
Type: Research Article
ISSN: 0144-3577

Keywords

Content available
Article
Publication date: 8 August 2018

Sarah E. Evans and Gregory Steeger

Abstract

Purpose

In the present fast-paced and globalized age of war, special operations forces have a comparative advantage over conventional forces because of their small, highly skilled units. Largely because of these characteristics, special operations forces spend a disproportionate amount of time deployed. The amount of time spent deployed affects service members' quality of life and their level of preparedness for the full spectrum of military operations. In this paper, the authors ask the following question: how many force packages are required to sustain a deployed force package, while maintaining predetermined combat-readiness and quality-of-life standards?

Design/methodology/approach

The authors begin by developing standardized deployment-to-dwell metrics to assess the effects of deployments on service members' quality of life and combat readiness. Next, they model deployment cycles using continuous-time Markov chains and derive closed-form equations that relate the time spent deployed versus at home station, the rotation length, the transition time and the total force size.

Findings

The expressions yield the total force size required to sustain a deployed capability.
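
As a back-of-the-envelope illustration of the kind of closed-form relationship derived in the paper (the exact expressions are the authors'; the figures below are invented), a two-state deployed/dwell cycle gives the long-run fraction of time a unit spends deployed, and dividing the sustained deployed requirement by that fraction gives the total force size.

# Illustrative two-state (deployed / at home station) cycle, not the authors' exact model.
deploy_months, dwell_months = 6.0, 12.0          # made-up rotation and dwell lengths
frac_deployed = deploy_months / (deploy_months + dwell_months)  # long-run fraction deployed
required_deployed_packages = 2                   # made-up sustained requirement
total_packages = required_deployed_packages / frac_deployed
print(f"fraction of time deployed: {frac_deployed:.2f}")   # 0.33, i.e. a 1:2 deploy-to-dwell ratio
print(f"force packages needed to sustain {required_deployed_packages} deployed: {total_packages:.1f}")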

Originality/value

Finally, the authors apply the method to the US Air Force Special Operations Command. This research has important implications for the force-structure logistics of any military force.

Details

Journal of Defense Analytics and Logistics, vol. 2 no. 1
Type: Research Article
ISSN: 2399-6439

Keywords

Article
Publication date: 15 May 2017

Puneet Pasricha, Dharmaraja Selvamuthu and Viswanathan Arunachalam

Abstract

Purpose

Credit ratings serve as an important input in several applications in risk management of financial firms. The level of credit rating changes from time to time because of random credit risk and, thus, can be modeled by an appropriate stochastic process. Markov chain models have been widely used in the literature to generate credit migration matrices; however, emerging empirical evidence suggests that the Markov property is not appropriate for credit rating dynamics. The purpose of this article is to address the non-Markov behavior of the rating dynamics.

Design/methodology/approach

This paper proposes a model based on a Markov regenerative process (MRGP) with a subordinated semi-Markov process (SMP) to estimate rating migration probability matrices and default probabilities. A numerical example based on historical Standard & Poor's (S&P) credit rating data illustrates the applicability of the proposed model.
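
For context, the sketch below shows the standard cohort way of estimating a one-year migration matrix by counting transitions and normalising rows; it is background only, not the MRGP/SMP estimator proposed in the paper, and the rating histories are fabricated.

# Background sketch: cohort estimate of a one-year rating migration matrix.
# This is NOT the MRGP/semi-Markov estimator of the paper; the data are fabricated.
import numpy as np

ratings = ["AAA", "AA", "A", "BBB", "D"]
idx = {r: i for i, r in enumerate(ratings)}

# Each tuple is (rating at start of year, rating at end of year) for one firm-year.
observations = [("AA", "AA"), ("AA", "A"), ("A", "A"), ("A", "BBB"),
                ("BBB", "BBB"), ("BBB", "D"), ("AAA", "AAA"), ("AAA", "AA")]

counts = np.zeros((len(ratings), len(ratings)))
for start, end in observations:
    counts[idx[start], idx[end]] += 1

row_totals = counts.sum(axis=1, keepdims=True)
migration = np.divide(counts, row_totals, out=np.zeros_like(counts), where=row_totals > 0)
print(np.round(migration, 2))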

Findings

The proposed model implies that a firm's future rating depends not only on its present rating but also on its previous ratings. If a firm receives a rating lower than its previous ratings, further downgrades become more likely, an effect known as rating momentum. The model also addresses the ageing problem of credit rating evolution.

Originality/value

The contribution of this paper is a more general approach to studying rating dynamics that overcomes the inappropriateness of the Markov process for modelling them.

Details

The Journal of Risk Finance, vol. 18 no. 3
Type: Research Article
ISSN: 1526-5943

Keywords

Open Access
Article
Publication date: 9 November 2022

Jing Wang, Nathan N. Huynh and Edsel Pena

Abstract

Purpose

This paper evaluates an alternative queuing concept for marine container terminals that utilize a truck appointment system (TAS). Instead of having all lanes serve trucks with appointments, this study considers the case where walk-in lanes are provided to serve trucks with no appointments or trucks with appointments that arrived late due to traffic congestion.

Design/methodology/approach

To enable the analysis of the proposed alternative queuing strategy, the queuing system is first shown mathematically to be stationary. Due to the complexity of the model, a discrete event simulation (DES) model is then used to obtain the average number of waiting trucks per lane for both types of service lane: TAS lanes and walk-in lanes.

Findings

The numerical experiment results indicated that the considered queuing strategy is most beneficial when the utilization of the TAS lanes is expected to be much higher than that of the walk-in lanes.

Originality/value

The novelty of this study is that it examines the scenario where trucks with appointments switch to the walk-in lanes upon arrival if the TAS-lane server is occupied and the walk-in lane server is not. This queuing strategy/policy could reduce the average waiting time of trucks at marine container terminals. Approximation equations are provided to help practitioners calculate the average truck queue length and the average truck queuing time for this type of queuing system.
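
A minimal discrete event simulation of this switching policy, written with the SimPy library as an illustration rather than a reproduction of the paper's DES model; the arrival and service rates and the one-server-per-lane-type layout are invented simplifications.

# Minimal SimPy sketch of the lane-switching policy described above; rates and the
# one-server-per-lane-type layout are simplifying assumptions, not the paper's DES.
import random
import simpy

RATE_TAS, RATE_WALK, SERVICE_RATE = 6.0, 2.0, 8.0   # made-up trucks per hour

def truck(env, tas, walk, has_appointment):
    # Appointment trucks jockey to the walk-in lane if it is idle and theirs is busy.
    if has_appointment and tas.count == tas.capacity and walk.count < walk.capacity:
        lane = walk
    else:
        lane = tas if has_appointment else walk
    with lane.request() as req:
        yield req
        yield env.timeout(random.expovariate(SERVICE_RATE))

def arrivals(env, tas, walk, rate, has_appointment):
    while True:
        yield env.timeout(random.expovariate(rate))
        env.process(truck(env, tas, walk, has_appointment))

def monitor(env, tas, walk, samples):
    while True:
        samples.append((len(tas.queue), len(walk.queue)))
        yield env.timeout(0.1)

random.seed(1)
env = simpy.Environment()
tas_lane = simpy.Resource(env, capacity=1)
walk_lane = simpy.Resource(env, capacity=1)
samples = []
env.process(arrivals(env, tas_lane, walk_lane, RATE_TAS, True))
env.process(arrivals(env, tas_lane, walk_lane, RATE_WALK, False))
env.process(monitor(env, tas_lane, walk_lane, samples))
env.run(until=1000)
avg_tas = sum(s[0] for s in samples) / len(samples)
avg_walk = sum(s[1] for s in samples) / len(samples)
print(f"avg queue length  TAS lane: {avg_tas:.2f}   walk-in lane: {avg_walk:.2f}")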

Details

Journal of International Logistics and Trade, vol. 20 no. 3
Type: Research Article
ISSN: 1738-2122

Keywords

Article
Publication date: 8 August 2016

Jakiul Hassan, Premkumar Thodi and Faisal Khan

Abstract

Purpose

The purpose of this paper is to propose a state-dependent stochastic Markov model for the availability analysis of a process plant, instead of the traditional time-dependent model.

Design/methodology/approach

The traditional concepts of system performance measurement and reliability (namely, binary, two-state concepts) are inadequate to characterize the performance of complex system components. Availability analysis considering an intermediate state, such as a degraded state, provides a better mechanism for mapping system performance. The availability model provides a better assessment of failure and repair characteristics for equipment in the sub-system and of its overall performance. In addition to availability analysis, this paper also discusses a preventive maintenance (PM) program to achieve target availability. In this model, the degraded state is treated as a PM state, and Markov analysis is used to determine the optimum maintenance interval.
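
A compact numerical illustration of the three-state idea (working, degraded as the PM state, failed): solving pi Q = 0 for an invented generator gives the steady-state availability as the probability mass on the operative states. The rates are placeholders, not the paper's case-study data.

# Illustrative three-state availability chain: 0 = working, 1 = degraded, 2 = failed.
# All rates are invented placeholders, not the paper's LNG case-study data.
import numpy as np

d, f, r_pm, r_cm = 0.02, 0.01, 0.5, 0.1   # degradation, failure, PM repair, corrective repair rates
Q = np.array([
    [-d,         d,          0.0 ],   # working  -> degraded
    [ r_pm, -(r_pm + f),     f   ],   # degraded -> repaired (PM) or failed
    [ r_cm,      0.0,       -r_cm],   # failed   -> working (corrective maintenance)
])

# Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation with the normalisation.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
availability = pi[0] + pi[1]       # the degraded state still counts as operative
print(f"steady-state availability ~ {availability:.4f}")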

Findings

The Markov process provides an easier way to measure the performance of the process facility. This study also revealed that the maintenance interval has a major influence on the availability of a process facility and on maintaining target availability. The developed model is also applicable to varying target availabilities and is capable of handling reconfigured process systems.

Research limitations/implications

Considering the degraded state as an operative state predicts a higher availability for the plant and makes the availability estimation more realistic and acceptable. Availability quantification, target availability allocation and a PM model are exemplified in a sub-system of a liquefied natural gas facility.

Originality/value

The unique features of the present study are: a Markov modeling approach integrating availability and PM; optimum PM interval determination for stochastically degrading components based on target availability; consideration of three-state systems; and consideration of increasing failure rates.

Details

Journal of Quality in Maintenance Engineering, vol. 22 no. 3
Type: Research Article
ISSN: 1355-2511

Keywords

Article
Publication date: 18 January 2008

S. Thomas Ng, Yuan Fang and Onuegbu O. Ugwu

Abstract

Purpose

The purpose of this paper is to examine the potential of applying Petri nets to improve construction material logistics analysis and modelling.

Design/methodology/approach

The characteristics of construction logistics are unveiled by analysing existing logistics management practices. In view of the dynamic nature of the construction logistics problem, a stochastic Petri nets (SPNs) approach is proposed to capture its time‐evolution property. A simulation model is developed using a simulation package called PetriTool™. Finally, a case example illustrates the way in which SPNs are used for analysing and modelling construction material logistics problems.
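
To give a flavour of what a Markovian stochastic Petri net boils down to computationally, the toy simulator below races exponential firing clocks over a tiny delivery/installation net; the net structure and rates are invented, and it is in no way a stand-in for the PetriTool™ model used in the paper.

# Minimal stochastic Petri net sketch (exponential firing delays) for a material
# delivery/installation loop; structure and rates are invented placeholders.
import random

marking = {"awaiting_delivery": 3, "site_stock": 0, "installed": 0}   # tokens per place

# Each transition: (input places, output places, firing rate per hour).
transitions = {
    "deliver": ({"awaiting_delivery": 1}, {"site_stock": 1}, 0.5),
    "install": ({"site_stock": 1},        {"installed": 1},  0.8),
}

def enabled(name):
    ins, _, _ = transitions[name]
    return all(marking[p] >= k for p, k in ins.items())

random.seed(0)
t = 0.0
while any(enabled(n) for n in transitions):
    # Race of exponential clocks among enabled transitions (Markovian SPN semantics).
    samples = {n: random.expovariate(transitions[n][2]) for n in transitions if enabled(n)}
    winner = min(samples, key=samples.get)
    t += samples[winner]
    ins, outs, _ = transitions[winner]
    for p, k in ins.items():
        marking[p] -= k
    for p, k in outs.items():
        marking[p] += k
    print(f"t={t:6.2f}h  fired {winner:8s}  marking={marking}")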

Findings

The results indicate that the impacts triggered by variations in delivery lead‐time and changes in delivery quantities can be approximated, thereby helping decision makers devise a more reliable and optimal materials management plan for construction projects.

Research limitations/implications

The complex routing patterns in demand analysis and the materials procurement methods that result in enlarged supply chains have not been considered in this paper.

Practical implications

The lack of a simple but powerful formalism for analysing and modelling the decision process in a dynamic environment hinders the implementation of efficient logistics systems in the construction industry. The SPNs model presented in this paper can support planners and managers in making construction logistics management decisions in such an environment.

Originality/value

This paper demonstrates that time‐based SPNs can offer richer solutions, especially when modelling the time‐evolution behaviours of construction logistics.

Details

Construction Innovation, vol. 8 no. 1
Type: Research Article
ISSN: 1471-4175

Keywords

Article
Publication date: 12 December 2022

Afshin Yaghoubi and Seyed Taghi Akhavan Niaki

Abstract

Purpose

One of the common approaches to improving system reliability is standby redundancy. Although many works on the applications of standby redundancy are available in the literature, they assume that the system components are independent of each other. In reality, the system components can depend on one another, so the failure of one component affects the failure rate of the remaining active components. In this paper, a standby two-unit system is considered, assuming a dependency between the switch and its associated active component.

Design/methodology/approach

This paper assumes that the failure times of the switch and its associated active component follow the Marshall–Olkin bivariate exponential distribution. The reliability analysis of the system is then carried out using the continuous-time Markov chain method.
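
The dependency structure can be made concrete with a short Monte Carlo check (complementary to, not a reproduction of, the paper's CTMC analysis): the Marshall–Olkin bivariate exponential arises from a common-shock construction, and the simulation below estimates the probability that both the switch and its active unit survive a mission time, compared against the known closed-form joint survival function. All rates are invented.

# Marshall-Olkin bivariate exponential via the common-shock construction:
# switch life = min(E_switch, E_common), unit life = min(E_unit, E_common).
# All rates and the mission time are invented placeholders.
import math
import random

l_switch, l_unit, l_common = 0.05, 0.10, 0.02   # per 1000 h: switch-only, unit-only, common shock
mission_time = 5.0                               # thousand hours (made up)

random.seed(42)
n, both_survive = 100_000, 0
for _ in range(n):
    e_switch = random.expovariate(l_switch)
    e_unit   = random.expovariate(l_unit)
    e_common = random.expovariate(l_common)
    if min(e_switch, e_common) > mission_time and min(e_unit, e_common) > mission_time:
        both_survive += 1

estimate = both_survive / n
# Closed-form joint survival at equal times: exp(-(l_switch + l_unit + l_common) * t).
exact = math.exp(-(l_switch + l_unit + l_common) * mission_time)
print(f"Monte Carlo P(both survive {mission_time}k h) ~ {estimate:.4f}  (exact {exact:.4f})")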

Findings

The application of the derived equations to determine the system's steady-state availability and reliability, together with a sensitivity analysis of the mean time to failure, is demonstrated using a numerical illustration.

Originality/value

Previous models of standby redundancy assumed independence between the switch and the associated active unit. In this paper, the switch and its associated component are assumed to be dependent on each other.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 6
Type: Research Article
ISSN: 0265-671X

Keywords

Abstract

Details

Transport Science and Technology
Type: Book
ISBN: 978-0-08-044707-0

Article
Publication date: 11 July 2023

Youssef El-Khatib and Abdulnasser Hatemi-J

Abstract

Purpose

The current paper proposes a prediction model for a cryptocurrency that encompasses three properties observed in cryptocurrency markets, namely high volatility, illiquidity and regime shifts. To the authors' knowledge, this paper is the first attempt to introduce a stochastic differential equation (SDE) for pricing cryptocurrencies that explicitly integrates these three significant stylized facts.

Design/methodology/approach

Cryptocurrencies are increasingly utilized by investors and financial institutions worldwide as an alternative means of exchange. To the authors’ best knowledge, there is no SDE in the literature that can be used for representing and evaluating the data-generating process for the price of a cryptocurrency.

Findings

Using Ito calculus, the authors provide a solution to the suggested SDE along with a mathematical proof. Numerical simulations are performed and compared with real data; the simulated paths appear to capture the dynamics of the price paths of two main cryptocurrencies in real markets.
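
For readers who want to see what simulating such a process looks like in code, below is a generic Euler–Maruyama sketch of a regime-switching geometric diffusion; it is not the authors' SDE, it omits the illiquidity component, and every parameter is an invented placeholder.

# Generic regime-switching GBM simulated with Euler-Maruyama; NOT the paper's SDE,
# and every parameter below is an invented placeholder.
import numpy as np

rng = np.random.default_rng(7)
T, steps = 1.0, 2520                 # one simulated year with ~10 intraday steps per day
dt = T / steps
mu = {0: 0.05, 1: -0.10}             # drift per regime (calm, stressed)
sigma = {0: 0.60, 1: 1.50}           # volatility per regime (cryptocurrencies are volatile)
switch_prob = {0: 0.002, 1: 0.010}   # per-step probability of leaving the current regime

price, regime = 100.0, 0
path = [price]
for _ in range(steps):
    if rng.random() < switch_prob[regime]:
        regime = 1 - regime
    dW = rng.normal(0.0, np.sqrt(dt))
    price *= 1.0 + mu[regime] * dt + sigma[regime] * dW
    path.append(price)

print(f"final simulated price: {path[-1]:.2f}, min: {min(path):.2f}, max: {max(path):.2f}")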

Originality/value

The stochastic differential model that is introduced and solved in this article is expected to be useful for the pricing of cryptocurrencies in situations of high volatility combined with structural changes and illiquidity. These attributes are apparent in the real markets for cryptocurrencies; therefore, accounting explicitly for these underlying characteristics is a necessary condition for accurate evaluation of cryptocurrencies.

Details

Journal of Economic Studies, vol. 51 no. 2
Type: Research Article
ISSN: 0144-3585

Keywords
