Search results

21 – 30 of over 5000
Article
Publication date: 3 January 2017

Shuyuan Liu and Tat L. Chan

Abstract

Purpose

The purpose of this paper is to study complex aerosol dynamic processes by using the newly developed stochastically weighted operator splitting Monte Carlo (SWOSMC) method.

Design/methodology/approach

The stochastically weighted particle method and the operator splitting method are coupled to formulate the SWOSMC method for the numerical simulation of particle-fluid systems undergoing complex simultaneous processes.
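
As a rough illustration of the operator splitting idea (not the paper's SWOSMC implementation), the Python sketch below alternates, within each time step, a toy stochastically weighted coagulation sub-step under an assumed constant kernel with a deterministic linear-rate condensation sub-step; the kernel value, growth rate and weight-update rule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weighted computational particles: entry i stands for w[i] real particles of volume v[i].
n_comp = 2_000
v = rng.exponential(1.0, n_comp)       # particle volumes (arbitrary units)
w = np.full(n_comp, 1.0e6)             # statistical weights (real particles per sample)

K0 = 1.0e-9                            # assumed constant coagulation kernel
G = 0.05                               # assumed linear condensation rate, dv/dt = G * v
dt, n_steps = 0.1, 200

def coagulation_substep(v, w, dt):
    """Toy stochastic coagulation sub-step (a stand-in, not the paper's weighting rule):
    random pairs merge with a probability proportional to the constant kernel."""
    idx = rng.permutation(len(v))
    for a, b in zip(idx[0::2], idx[1::2]):
        p = K0 * w[b] * dt                 # merge probability seen by particle a
        if rng.random() < min(p, 1.0):
            v[a] += v[b]                   # a absorbs b's volume
            w[b] *= 0.5                    # crude weight reduction for the consumed partner
    return v, w

def condensation_substep(v, dt):
    """Deterministic linear-rate condensation, dv/dt = G * v (exact exponential update)."""
    return v * np.exp(G * dt)

for _ in range(n_steps):
    v, w = coagulation_substep(v, w, dt)   # stochastic operator
    v = condensation_substep(v, dt)        # deterministic operator (splitting step)

print("number-weighted mean volume:", np.average(v, weights=w))
```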

Findings

The SWOSMC method is first validated by comparing its numerical results for constant-rate coagulation and linear-rate condensation with the corresponding analytical solutions. Coagulation and nucleation cases are then studied, and the results agree excellently with those of the sectional method. The SWOSMC method also demonstrates high numerical simulation capability when applied to simultaneous aerosol dynamic processes including coagulation, nucleation and condensation.

Originality/value

Monte Carlo-based methods for the numerical simulation of aerosol dynamics always face a trade-off between computational cost and accuracy. The operator splitting method has been widely used to solve complex partial differential equations, while the stochastically weighted particle method has been commonly used in the numerical simulation of aerosol dynamics. However, the integration of these two methods has not been well investigated.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 27 no. 1
Type: Research Article
ISSN: 0961-5539

Keywords

Article
Publication date: 1 July 2014

Sekar Vinodh and Gopinath Rathod

Abstract

Purpose

The purpose of this paper is to present an integrated technical and economic model to evaluate the reusability of products or components.

Design/methodology/approach

Life cycle assessment (LCA) methodology is applied to obtain the product’s environmental performance. Monte Carlo simulation is used to enable sustainable product design.

Findings

The results show that the model is capable of assessing the potential reusability of used products, while the use of simulation significantly increases the effectiveness of the model in addressing uncertainties.

Research limitations/implications

The case study has been conducted in a single manufacturing organization. The implications derived from the study are found to be practical and useful to the organization.

Practical implications

The paper reports a case study carried out for an Indian rotary switch manufacturing organization. Hence, the model is practically feasible.

Originality/value

The article presents a study that investigates LCA and simulation as enablers of sustainable product design. Hence, the contributions of this article are original and valuable.

Details

Journal of Engineering, Design and Technology, vol. 12 no. 3
Type: Research Article
ISSN: 1726-0531

Keywords

Article
Publication date: 12 February 2021

Abroon Qazi and Mecit Can Emre Simsekler

This paper aims to develop a process for prioritizing project risks that integrates the decision-maker's risk attitude, uncertainty about risks both in terms of the associated…

Abstract

Purpose

This paper aims to develop a process for prioritizing project risks that integrates the decision-maker's risk attitude, uncertainty about risks both in terms of the associated probability and impact ratings, and correlations across risk assessments.

Design/methodology/approach

This paper adopts a Monte Carlo Simulation-based approach to capture the uncertainty associated with project risks. Risks are prioritized based on their relative expected utility values. The proposed process is operationalized through a real application in the construction industry.
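
A minimal sketch of this kind of simulation-based prioritization, with hypothetical risks, an assumed correlation structure and an exponential utility standing in for the decision-maker's risk attitude (none of these values come from the paper's construction case):

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 20_000

# Hypothetical risk register: probability and impact ratings plus assumed uncertainty;
# the values are illustrative, not taken from the paper's application.
risks = {
    "design change":     {"p": 0.30, "impact": 4.0},
    "labour shortage":   {"p": 0.60, "impact": 2.5},
    "ground conditions": {"p": 0.10, "impact": 4.8},   # low-probability, high-impact
}
rho = 0.4     # assumed common correlation across risk assessments
gamma = 0.8   # risk-aversion coefficient encoding the decision-maker's risk attitude

def utility(x, gamma):
    """Exponential (constant absolute risk aversion) utility of impact."""
    return (1.0 - np.exp(-gamma * x)) / gamma

common = rng.standard_normal(n_draws)                  # shared shock induces correlation
expected_utility = {}
for name, r in risks.items():
    shock = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal(n_draws)
    p = np.clip(r["p"] * (1 + 0.3 * shock), 0.0, 1.0)            # uncertain probability rating
    impact = np.clip(r["impact"] * (1 + 0.2 * shock), 0.0, 5.0)  # uncertain impact rating
    expected_utility[name] = np.mean(p * utility(impact, gamma))

# Rank risks by relative expected utility (higher value means higher priority).
for name, eu in sorted(expected_utility.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:18s} expected utility = {eu:.3f}")
```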

Findings

The proposed process helped in identifying low-probability, high-impact risks that were overlooked in the conventional risk matrix-based prioritization scheme. When only the expected risk exposure of individual risks was considered, none of the risks fell in the high-risk exposure zone; however, the proposed Monte Carlo Simulation-based approach revealed risks with a high probability of occurrence in the high-risk exposure zone. Using the expected utility-based approach alone to prioritize risks may therefore overlook a few critical risks, which can only be captured through a rigorous simulation-based approach.

Originality/value

Monte Carlo Simulation has been used to aggregate the risk matrix-based data and disaggregate and map the resulting risk profiles with underlying distributions. The proposed process supported risk prioritization based on the decision-maker's risk attitude and identified low-probability, high-impact risks and high-probability, high-impact risks.

Details

International Journal of Managing Projects in Business, vol. 14 no. 5
Type: Research Article
ISSN: 1753-8378

Keywords

Article
Publication date: 7 September 2015

Clarissa Ai Ling Lee

The purpose of this paper is to recuperate Heinz von Foerster’s “Quantum Mechanical Theory of Memory” from Cybernetics: Circular, Causal, and Feedback Mechanisms in Biological and

Abstract

Purpose

The purpose of this paper is to recuperate Heinz von Foerster’s “Quantum Mechanical Theory of Memory” from Cybernetics: Circular, Causal, and Feedback Mechanisms in Biological and Social Systems and John von Neumann’s The Computer and the Brain for present-day and future applications in biophysics, theories of information and cognition, and quantum theories; the main objective is to ground cybernetic theory for a critical evaluation of the historical evolution of the Monte Carlo method, with potential for application to quantum computing.

Design/methodology/approach

Close reading of selected texts, historiography, and case studies of current developments in the Monte Carlo method in high-energy particle physics (HEP) are used to develop a platform for bridging the apparently incommensurable differences between the physical-mathematical and the biological sciences.

Findings

First, the cybernetic approach proves useful for historicizing the Monte Carlo method in relation to digital computing and quantum physics. Second, the paper develops an inter/trans-disciplinary approach to the hard sciences through a critical re-evaluation of the historical texts of von Foerster and von Neumann, with application to developments in quantum theory, biophysics, and computing.

Research limitations/implications

This work is largely theoretical and uses dialectical thought experiments to mediate between sciences operating across different ontological scales.

Practical implications

Consideration of developments in quantum computing and how they would change one’s perception of information, data, and the way analysis is currently performed with big data.

Originality/value

This is the first time that von Neumann and von Foerster have been contrasted and compared in relation to their epistemic compatibility, historical importance, and relevance for producing a creative approach to current scientific epistemology. This paper hopes to change how the authors view trans-disciplinary/inter-disciplinary practices in the sciences and produce new vistas of thought in the history and philosophy of science.

Details

Kybernetes, vol. 44 no. 8/9
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 23 November 2012

Sami J. Habib and Paulvanna N. Marimuthu

Energy constraint is always a serious issue in wireless sensor networks, as the energy possessed by the sensors is limited and non‐renewable. Data aggregation at intermediate base…

Abstract

Purpose

Energy constraint is always a serious issue in wireless sensor networks, as the energy possessed by the sensors is limited and non-renewable. Data aggregation at intermediate base stations increases the lifespan of the sensors, whereby the sensors' data are aggregated before being communicated to the central server. This paper proposes a query-based aggregation scheme within a Monte Carlo simulator to explore the best and worst possible query orders for aggregating the sensors' data at the base stations. The proposed query-based aggregation model can help the network administrator envisage the best query orders for improving the performance of the base stations under uncertain query ordering. Furthermore, the paper examines the feasibility of the proposed model in handling simultaneous transmissions at the base station and derives a best-fit mathematical model to study the behavior of data aggregation under uncertain querying order.

Design/methodology/approach

The paper considers small and medium-sized wireless sensor networks composed of randomly deployed sensors in a square arena. It formulates the query-based data aggregation problem as an uncertain ordering problem within a Monte Carlo simulator, generating several thousand uncertain orders to schedule the responses of M sensors at the base station within the specified time interval. For each selected time interval, the model finds the best possible querying order to aggregate the data with reduced idle time and improved throughput. The model is further extended to include multiple sensing parameters and multiple aggregating channels, enabling the administrator to plan the capacity of the WSN according to specific time intervals known in advance.
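
A minimal sketch of the uncertain-ordering idea for the single-channel case, with assumed ready times, transmission durations and aggregation interval (the values and the scoring rule are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(7)

M = 25            # sensors with a single sensing parameter (as in the first experiment)
T = 120           # aggregation time interval in time units (assumed for the sketch)
n_orders = 5_000  # uncertain query orders generated by the Monte Carlo loop

ready = rng.uniform(0, 0.6 * T, M)     # assumed times at which each sensor's data is ready
duration = rng.uniform(1, 6, M)        # assumed per-sensor transmission durations

def evaluate(order):
    """Simulate one query order: return (sensors aggregated within T, channel idle time)."""
    t, idle, done = 0.0, 0.0, 0
    for s in order:
        start = max(t, ready[s])
        if start + duration[s] > T:    # this response cannot finish inside the interval
            continue
        idle += start - t              # channel waited for the sensor to become ready
        t = start + duration[s]
        done += 1
    return done, idle

best = worst = None
for _ in range(n_orders):
    done, idle = evaluate(rng.permutation(M))
    score = (done, -idle)              # more aggregated data first, then less idle time
    if best is None or score > best:
        best = score
    if worst is None or score < worst:
        worst = score

print(f"best order : {best[0]} sensors aggregated, idle time {-best[1]:.1f}")
print(f"worst order: {worst[0]} sensors aggregated, idle time {-worst[1]:.1f}")
```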

Findings

The experimental results within the Monte Carlo simulator demonstrate that the query-based aggregation scheme offers a good trade-off, maximizing aggregation efficiency while reducing the average idle time experienced by individual sensors. The model was tested for a WSN containing 25 sensors with a single sensing parameter transmitting data to a base station; the simulation results show continuous improvement in best-case performance from 56 percent to 96 percent over time intervals of 80 to 200 time units. The query aggregation is further extended to analyze the behavior of a WSN with 50 sensors sensing two environmental parameters and a base station equipped with multiple channels, which demonstrates a shorter aggregation time interval than a single channel. Analysis of the average waiting time of individual sensors across the generated uncertain querying orders shows that the best-case scenario within a specified time interval gains 10 to 20 percent over the worst-case scenario, reducing the total transmission time by around 50 percent.

Practical implications

The proposed query-based data aggregation model can be utilized to predict the non-deterministic real-time behavior of the wireless sensor network in response to queries flooded by the base station.

Originality/value

This paper employs a novel framework to analyze all possible orderings of sensor responses to be aggregated at the base station within the stipulated aggregating time interval.

Details

International Journal of Pervasive Computing and Communications, vol. 8 no. 4
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 7 September 2010

Ruth Banomyong and Apichat Sopadang

The purpose of this paper is to provide a framework for the development of emergency logistics response models. The proposition of a conceptual framework is in itself not…

Abstract

Purpose

The purpose of this paper is to provide a framework for the development of emergency logistics response models. The proposition of a conceptual framework is in itself not sufficient; simulation models are also needed to help emergency logistics decision makers refine their preparedness planning process.

Design/methodology/approach

The paper presents a framework proposition with an illustrative case study.

Findings

The use of simulation modelling can help enhance the reliability and validity of the developed emergency response model.

Research limitations/implications

The emergency response model outcomes are based on simulated outputs and would need to be validated in a real-life environment. Proposing a new or revised emergency logistics response model is not sufficient; developed logistics response models need to be further validated, and simulation modelling can help enhance their validity.

Practical implications

Emergency logistics decision makers can make better informed decisions based on simulation model output and can further refine their decision‐making capability.

Originality/value

The paper posits the contribution of simulation modelling as part of the framework for developing and refining emergency logistics response.

Details

International Journal of Physical Distribution & Logistics Management, vol. 40 no. 8/9
Type: Research Article
ISSN: 0960-0035

Keywords

Book part
Publication date: 30 August 2019

Md. Nazmul Ahsan and Jean-Marie Dufour

Statistical inference (estimation and testing) for the stochastic volatility (SV) model Taylor (1982, 1986) is challenging, especially likelihood-based methods which are difficult…

Abstract

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult to apply due to the presence of latent variables. Existing methods are either computationally costly or inefficient. In this paper, we propose computationally simple estimators for the SV model that are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to make reliable, even exact, simulation-based inference through the application of Monte Carlo (MC) tests or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods in terms of bias, root mean square error and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily return observations on three major stocks (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.
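
For intuition only, the sketch below simulates a basic SV model and recovers its parameters from low-order autocovariances of log squared returns, exploiting the same ARMA(1,1) structure the chapter builds on; this textbook moment estimator is noisier than, and not identical to, the proposed ARMA-SV estimators:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a basic SV model: r_t = exp(h_t / 2) * eps_t,
#                            h_t = mu + phi * (h_{t-1} - mu) + sigma_v * v_t
T, mu, phi, sigma_v = 50_000, -0.5, 0.90, 0.30
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_v * rng.standard_normal()
r = np.exp(h / 2) * rng.standard_normal(T)

# x_t = log r_t^2 = h_t + log eps_t^2 has an ARMA(1,1) structure:
# for lags k >= 1 its autocovariance equals phi**k * Var(h).
x = np.log(r**2)
xc = x - x.mean()
gamma1 = np.mean(xc[1:] * xc[:-1])
gamma2 = np.mean(xc[2:] * xc[:-2])

phi_hat = gamma2 / gamma1                     # ratio of lag-2 to lag-1 autocovariance
var_h_hat = gamma1 / phi_hat                  # Var(h) implied by the lag-1 autocovariance
sigma_v_hat = np.sqrt(max(var_h_hat * (1 - phi_hat**2), 0.0))
mu_hat = x.mean() + 1.2704                    # E[log eps_t^2] = -1.2704 for Gaussian eps_t

print(f"phi:      true {phi:.2f}  estimated {phi_hat:.2f}")
print(f"sigma_v:  true {sigma_v:.2f}  estimated {sigma_v_hat:.2f}")
print(f"mu:       true {mu:.2f}  estimated {mu_hat:.2f}")
```

Because the estimates come from closed-form expressions, no numerical optimization or starting values are needed, which is what makes repeated simulation-based inference (MC tests, bootstrap) cheap in this setting.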

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Keywords

Article
Publication date: 27 December 2022

John O'Neill, Barry Bloom and Khoa Tang

The purpose of this paper is to be the first empirical article to provide necessary standard deviation inputs for adoption in probabilistic prognostications of hotel revenues and…

Abstract

Purpose

This paper is the first empirical article to provide the standard deviation inputs needed for probabilistic prognostications of hotel revenues and expenses, i.e. prognostications that consider risk. Commonly accepted methodologies for developing hotel financial projections, which result in point estimates of upcoming performance, have been perceived as egregiously insufficient because they do not consider risk in lodging investments. Previous research has recommended the use of probabilistic methodologies to address this concern, specifically Monte Carlo simulation. This methodology requires estimates of the standard deviations of specific future hotel revenue and expense items, and this paper provides such inputs based on a large sample of actual, recent data.

Design/methodology/approach

This study uses a sample of recent hotel profit and loss (P&L) statements for over 3,000 hotels (over 19,000 P&L statements) to provide analysts with empirically supported standard deviations that may be applied to Uniform System of Accounts for the Lodging Industry (USALI) revenue and expense line items in hotel financial prognostications.
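
As an illustration of how such standard deviations feed a probabilistic projection, the sketch below draws a handful of hypothetical USALI-style line items from normal distributions and accumulates a gross operating profit (GOP) distribution; all point estimates and standard deviations here are invented for the example, not the paper's empirical values:

```python
import numpy as np

rng = np.random.default_rng(2024)
n_sims = 50_000

# Hypothetical point estimates ($000s) and relative standard deviations for a few USALI-style
# line items; the empirically derived standard deviations themselves are reported in the paper.
line_items = {
    # name: (point estimate, std dev as a fraction of the estimate, +1 revenue / -1 expense)
    "rooms revenue":          (9_000, 0.12, +1),
    "f&b revenue":            (3_000, 0.15, +1),
    "rooms expense":          (2_200, 0.08, -1),
    "f&b expense":            (2_400, 0.09, -1),
    "undistributed expenses": (3_100, 0.06, -1),
}

gop = np.zeros(n_sims)
for name, (mean, rel_sd, sign) in line_items.items():
    draws = rng.normal(mean, rel_sd * mean, n_sims)   # one distribution per line item
    gop += sign * draws                               # revenues add, expenses subtract

print(f"expected GOP        : {gop.mean():,.0f}")
print(f"5th-percentile GOP  : {np.percentile(gop, 5):,.0f}")
print(f"P(GOP below 3,000)  : {(gop < 3_000).mean():.1%}")
```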

Findings

Standard deviations are reported for typical line items as defined in the USALI, and practitioners may use them as inputs for hotel financial projections. Hotel revenue items generally have higher standard deviations than expense items. Detailed findings are presented in the manuscript, both overall and by hotel class.

Practical implications

Rather than adopting standard deviations of hotel revenue and expense line items based on guesswork or judgment, which is the current “state of the art” in hotel financial projections, practitioners can use the actual standard deviations provided in this paper in probabilistic prognostications of hotel revenues and expenses.

Originality/value

This paper may be the first to provide practitioners with actual standard deviations, based on typical USALI line items, for adoption in probabilistic prognostications of hotel revenues and expenses.

Details

Journal of Hospitality and Tourism Insights, vol. 6 no. 5
Type: Research Article
ISSN: 2514-9792

Keywords

Article
Publication date: 1 February 2002

ZHENG WANG

Incremental value‐at‐risk (VaR) is used to measure the effectiveness of diversification. However, the statistical properties of the estimated incremental VaR have not been fully…

Abstract

Incremental value-at-risk (VaR) is used to measure the effectiveness of diversification. However, the statistical properties of the estimated incremental VaR have not been fully explored. In this article, the author compares incremental VaR with other VaR-based risk measures and derives the exact distribution of the estimated incremental VaR when it is obtained using Monte Carlo simulation. The results are general and imply that the estimated incremental VaR depends on the simulation method.
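
A minimal sketch of how incremental VaR is commonly obtained by Monte Carlo simulation (an illustration with invented portfolio weights, volatilities and correlations, not the article's derivation): simulate correlated returns, compute the portfolio VaR with and without the candidate position, and take the difference.

```python
import numpy as np

rng = np.random.default_rng(11)
n_sims = 200_000

# Hypothetical portfolio: two existing positions plus a candidate third one (all values illustrative).
weights_base = np.array([6.0, 4.0])      # $m in existing positions
weight_new = 1.0                         # $m in the candidate position
mu = np.array([0.005, 0.004, 0.006])     # monthly expected returns
vol = np.array([0.04, 0.05, 0.09])       # monthly volatilities
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
cov = corr * np.outer(vol, vol)

returns = rng.multivariate_normal(mu, cov, n_sims)

def var_99(pnl):
    """99% value-at-risk: the loss exceeded in only 1% of simulated scenarios."""
    return -np.percentile(pnl, 1)

pnl_without = returns[:, :2] @ weights_base            # portfolio P&L without the position
pnl_with = pnl_without + returns[:, 2] * weight_new    # P&L including the candidate position

print(f"VaR without position: {var_99(pnl_without):.3f}")
print(f"VaR with position   : {var_99(pnl_with):.3f}")
print(f"incremental VaR     : {var_99(pnl_with) - var_99(pnl_without):.3f}")
```

Because the quantile is estimated from a finite set of simulated scenarios, the incremental VaR inherits sampling error from the simulation, which is the dependence on the simulation method the article examines.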

Details

The Journal of Risk Finance, vol. 3 no. 3
Type: Research Article
ISSN: 1526-5943

Article
Publication date: 16 August 2011

Akihiro Fukushima

The purpose of this paper is to propose two hybrid forecasting models which integrate available ones. A hybrid contaminated normal distribution (CND) model accurately reflects the…

Abstract

Purpose

The purpose of this paper is to propose two hybrid forecasting models that integrate available ones. A hybrid contaminated normal distribution (CND) model accurately reflects the non-normal features of monthly S&P 500 index returns, and a hybrid GARCH model captures the serial correlation of volatility. The hybrid GARCH model potentially enables financial institutions to evaluate long-term investment risks in the S&P 500 index more accurately than current models.

Design/methodology/approach

The probability distribution of the expected investment outcome is generated with a Monte Carlo simulation. The taller peak and fatter tails (kurtosis) that characterize the distribution of monthly S&P 500 index returns are produced by integrating a CND model and a bootstrapping model. The serial correlation of volatilities is simulated by applying a GARCH model.
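
For illustration, the sketch below generates long-horizon outcome distributions under two of the ingredients the paper combines, a contaminated normal (mixture) return model and a GARCH(1,1) volatility model; the parameter values are assumptions, not the paper's calibration, and the bootstrapping and decision-tree components are omitted:

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, horizon = 20_000, 120       # 10 years of monthly returns per simulated path

# Illustrative parameters (not the paper's calibration to S&P 500 index returns).
mu = 0.007                                     # mean monthly return
p_tail, sd_calm, sd_tail = 0.10, 0.03, 0.08    # contaminated normal mixture

def simulate_cnd():
    """Monthly returns from a two-component contaminated normal (taller peak, fatter tails)."""
    tail = rng.random((n_paths, horizon)) < p_tail
    sd = np.where(tail, sd_tail, sd_calm)
    return mu + sd * rng.standard_normal((n_paths, horizon))

def simulate_garch(omega=4e-5, alpha=0.10, beta=0.85):
    """Monthly returns from a GARCH(1,1) process (serially correlated volatility)."""
    r = np.empty((n_paths, horizon))
    var = np.full(n_paths, omega / (1 - alpha - beta))   # start at the unconditional variance
    for t in range(horizon):
        r[:, t] = mu + np.sqrt(var) * rng.standard_normal(n_paths)
        var = omega + alpha * (r[:, t] - mu) ** 2 + beta * var
    return r

for name, sim in [("contaminated normal", simulate_cnd), ("GARCH(1,1)", simulate_garch)]:
    wealth = np.prod(1.0 + sim(), axis=1)                # terminal wealth of $1 per path
    print(f"{name:20s} median {np.median(wealth):.2f}   5th pct {np.percentile(wealth, 5):.2f}")
```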

Findings

The hybrid CND model can simulate the non‐normality of monthly S&P 500 index returns, while avoiding the influence of discrete observations. The hybrid GARCH model, by contrast, can simulate the serial correlation of S&P 500 index volatilities, while generating fatter tails. Long‐term investment risks in the S&P 500 index are affected by the serial correlation of volatilities, not the non‐normality of returns.

Research limitations/implications

The hybrid models are applied only to the S&P 500 index. Cross‐sectional correlations among different asset groups are not examined.

Originality/value

The proposed hybrid models are unique because they combine available ones with a decision tree algorithm. In addition, the paper clearly explains the strengths and weaknesses of existing forecasting models.

Details

The Journal of Risk Finance, vol. 12 no. 4
Type: Research Article
ISSN: 1526-5943

Keywords
