Search results

1 – 10 of over 1000
Book part
Publication date: 19 November 2014

Garland Durham and John Geweke

Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially…

Abstract

Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits, algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities, and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
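
The high-level recipe in this abstract, draw particles from the prior, reweight them as observations are introduced, resample, and rejuvenate with a move step while accumulating the marginal likelihood as a by-product, can be sketched in a few lines. The following Python sketch is only an illustration of generic data-tempered sequential Monte Carlo under assumed user-supplied functions (sample_prior, log_prior, log_like); it is not the authors' algorithm or their parallel implementation.

```python
import numpy as np

def smc_posterior(sample_prior, log_prior, log_like, data, n=4096, seed=0):
    """Minimal data-tempered sequential posterior simulator (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    theta = sample_prior(n, rng)                  # particles drawn from the prior, shape (n, d)
    log_ml = 0.0                                  # log marginal likelihood, built up as a by-product
    for t, y in enumerate(data):
        logw = log_like(theta, y)                 # incremental weights for the new observation
        m = logw.max()
        w = np.exp(logw - m)
        log_ml += m + np.log(w.mean())            # marginal-likelihood increment
        idx = rng.choice(n, size=n, p=w / w.sum())  # multinomial resampling
        theta = theta[idx]
        # one random-walk Metropolis step targeting the posterior given data[: t + 1],
        # to rejuvenate the duplicated particles
        prop = theta + 0.1 * rng.standard_normal(theta.shape)

        def logpost(th):
            return log_prior(th) + sum(log_like(th, yy) for yy in data[: t + 1])

        accept = np.log(rng.uniform(size=n)) < logpost(prop) - logpost(theta)
        theta[accept] = prop[accept]
    return theta, log_ml
```

In this style of simulator, the particle loop is the natural unit to distribute across many cores; the sketch above keeps everything serial for clarity.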

Article
Publication date: 21 May 2020

Osman Hürol Türkakın, Ekrem Manisalı and David Arditi

In smaller projects with limited resources, schedule updates are often not performed. In these situations, traditional delay analysis methods cannot be used as they all require…

Abstract

Purpose

In smaller projects with limited resources, schedule updates are often not performed. In these situations, traditional delay analysis methods cannot be used as they all require updated schedules. The objective of this study is to develop a model that performs delay analysis by using only an as-planned schedule and the expense records kept on site.

Design/methodology/approach

This study starts out by developing an approach that estimates activity duration ranges in a network schedule by using as-planned and as-built s-curves. Monte Carlo simulation is performed to generate candidate as-built schedules using these activity duration ranges. If necessary, the duration ranges are refined by a follow-up procedure that systematically relaxes the ranges and develops new as-built schedules. The candidate schedule that has the closest s-curve to the actual s-curve is considered to be the most realistic as-built schedule. Finally, the as-planned vs. as-built delay analysis method is performed to determine which activity(ies) caused project delay. This process is automated using Matlab. A test case is used to demonstrate that the proposed automated method can work well.
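
A minimal sketch may make the Monte Carlo step concrete: sample candidate activity durations from assumed ranges, run a forward-pass CPM schedule, build each candidate's cumulative-cost s-curve, and keep the candidate whose s-curve is closest to the actual one. All activity data, cost rates and the uniform sampling below are illustrative assumptions, not the authors' Matlab implementation or test case.

```python
import numpy as np

def forward_pass(durations, predecessors):
    """Early start/finish times via a simple CPM forward pass (activities in topological order)."""
    n = len(durations)
    es, ef = np.zeros(n), np.zeros(n)
    for i in range(n):
        es[i] = max((ef[p] for p in predecessors[i]), default=0.0)
        ef[i] = es[i] + durations[i]
    return es, ef

def s_curve(es, ef, daily_cost, horizon):
    """Cumulative cost over time, assuming each activity's cost accrues uniformly."""
    days = np.arange(horizon)
    spent = np.zeros(horizon)
    for start, finish, rate in zip(es, ef, daily_cost):
        # fraction of each day [d, d+1) that falls inside the activity window
        spent += rate * (np.clip(days + 1, start, finish) - np.clip(days, start, finish))
    return np.cumsum(spent)

def best_as_built(dur_ranges, predecessors, daily_cost, actual_curve, trials=5000, seed=0):
    """Monte Carlo search for the candidate as-built schedule whose s-curve is closest to the actual one."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(dur_ranges, float).T
    best, best_err = None, np.inf
    for _ in range(trials):
        d = rng.uniform(lo, hi)                      # one candidate set of as-built durations
        es, ef = forward_pass(d, predecessors)
        curve = s_curve(es, ef, daily_cost, len(actual_curve))
        err = np.sum((curve - actual_curve) ** 2)    # distance between candidate and actual s-curves
        if err < best_err:
            best, best_err = d, err
    return best, best_err
```

The follow-up refinement procedure described above would correspond to relaxing dur_ranges and repeating the search when no candidate's s-curve comes acceptably close.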

Findings

The automated process developed in this study has the capability to develop activity duration ranges, perform Monte Carlo simulation, generate a large number of candidate as-built schedules, build s-curves for each of the candidate schedules and identify the most realistic candidate, namely the one whose s-curve is closest to the actual as-built s-curve. The test case confirmed that the proposed automated system works well, as it resulted in an as-built schedule whose s-curve is identical to the actual as-built s-curve. Developing an as-built schedule using this method is a reasonable way to make a case in or out of a court of law.

Research limitations/implications

The specification of activity duration ranges by practitioners for Monte Carlo simulation can be characterized as subjective and perhaps arbitrary. To minimize the effects of this limitation, this study proposes a method that determines duration ranges by comparing as-built and as-planned cash flows, and then by systematically modifying the search space. Another limitation is the assumption that the precedence logic in the as-planned network remains the same throughout construction. Since updated schedules are not available in the scenario considered in this study, and since in small projects the logic relationships are fairly stable over the short project duration, the assumption of stable logic throughout construction may be reasonable, but this issue needs to be explored further in future research.

Practical implications

Delays are common in construction projects regardless of the size of the project. The critical path method (CPM) schedules of many smaller projects, especially in developing countries, are not updated during construction. When updated schedules are not available, the method presented in this paper offers an automated, practical and easy-to-use tool that allows parties to a contract to perform delay analysis with only an as-planned schedule and the expense logs kept on site.

Originality/value

Since an as-built schedule cannot be built without updated schedules, and since the absence of an as-built schedule precludes the use of any delay analysis method that is acceptable in courts of law, using the method presented in this paper may very well be the only solution to the problem.

Details

Engineering, Construction and Architectural Management, vol. 27 no. 10
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 5 March 2018

Pengbo Wang and Jingxuan Wang

Uncertainty is ubiquitous in practical engineering and scientific research. The uncertainties in parameters can be treated as interval numbers. The prediction of upper and lower…

Abstract

Purpose

Uncertainty is ubiquitous in practical engineering and scientific research. The uncertainties in parameters can be treated as interval numbers. The prediction of upper and lower bounds of the response of a system including uncertain parameters is of immense significance in uncertainty analysis. This paper aims to evaluate the upper and lower bounds of electric potentials in an electrostatic system efficiently with interval parameters.

Design/methodology/approach

The Taylor series expansion is proposed for evaluating the upper and lower bounds of electric potentials in an electrostatic system with interval parameters. The uncertain parameters of the electrostatic system are represented by interval notations. By performing Taylor series expansion on the electric potentials obtained using the equilibrium governing equation and by using the properties of interval mathematics, the upper and lower bounds of the electric potentials of an electrostatic system can be calculated.
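
The first-order interval idea described here can be sketched briefly: expand the potentials around the interval midpoints and bound them by summing the absolute sensitivities multiplied by the interval radii. The sketch below is an assumption-laden illustration, not the authors' formulation; solve_potential stands in for a solver of the equilibrium governing equation, and the sensitivities are approximated by finite differences rather than analytical Taylor derivatives.

```python
import numpy as np

def interval_bounds(solve_potential, p_lower, p_upper, h=1e-6):
    """First-order Taylor / interval-arithmetic bounds on the potentials (illustrative)."""
    p_lower = np.asarray(p_lower, float)
    p_upper = np.asarray(p_upper, float)
    p_mid = 0.5 * (p_lower + p_upper)          # interval midpoints
    radius = 0.5 * (p_upper - p_lower)         # interval radii
    phi_mid = solve_potential(p_mid)           # nominal potentials at the midpoint
    spread = np.zeros_like(phi_mid)
    for i in range(len(p_mid)):
        dp = np.zeros_like(p_mid)
        dp[i] = h
        # central-difference sensitivity of the potentials to parameter i
        dphi = (solve_potential(p_mid + dp) - solve_potential(p_mid - dp)) / (2.0 * h)
        spread += np.abs(dphi) * radius[i]     # each interval contributes |dphi/dp_i| * radius_i
    return phi_mid - spread, phi_mid + spread
```

The cost here is one nominal solve plus two solves per interval parameter, which is why such approaches can be far cheaper than a full Monte Carlo sampling of the intervals.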

Findings

To evaluate the accuracy and efficiency of the proposed method, its upper and lower bounds of the electric potentials and its computation time are compared with those obtained using Monte Carlo simulation, which serves as a reference solution. Numerical examples illustrate that the bounds of the electric potentials obtained by this method are consistent with those obtained using Monte Carlo simulation. Moreover, the proposed method is significantly faster.

Originality/value

This paper provides a rapid computational method to estimate the upper and lower bounds of electric potentials in electrostatic analysis with interval parameters. The precision of the proposed method is acceptable for engineering applications, and its computation time is significantly less than that of Monte Carlo simulation, the most widely used method for handling uncertainties. Monte Carlo simulation requires a large number of samples, which leads to substantial runtime.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 37 no. 2
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 7 September 2015

Clarissa Ai Ling Lee

The purpose of this paper is to recuperate Heinz von Foerster’s “Quantum Mechanical Theory of Memory” from Cybernetics: Circular, Causal, and Feedback Mechanisms in Biological and…

Abstract

Purpose

The purpose of this paper is to recuperate Heinz von Foerster’s “Quantum Mechanical Theory of Memory” from Cybernetics: Circular, Causal, and Feedback Mechanisms in Biological and Social Systems and John von Neumann’s The Computer and the Brain for present-day, and future, applications in biophysics, theories of information and cognition, and quantum theories; the main objective is to ground cybernetic theory for a critical evaluation of the historical evolution of the Monte Carlo method, with potential for application to quantum computing.

Design/methodology/approach

Close reading of selected texts, historiography, and case studies of current developments in the Monte Carlo method in high-energy particle physics (HEP), aimed at developing a platform for bridging the apparently incommensurable differences between the physical-mathematical and the biological sciences.

Findings

First, the usefulness of the cybernetic approach for historicizing the Monte Carlo method in relation to digital computing and quantum physics. Second, the development of an inter/trans-disciplinary approach to the hard sciences through a critical re-evaluation of the historical texts of von Foerster and von Neumann for application to developments in quantum theory, biophysics, and computing.

Research limitations/implications

This work is largely theoretical and uses dialectical thought experiments to engage between sciences operating across different ontological scales.

Practical implications

Consideration of developments of quantum computing and how that would change one’s perception of information, data, and the way in which analysis is currently performed with big data.

Originality/value

This is the first time that von Neumann and von Foerster have been contrasted and compared in relation to their epistemic compatibility, historical importance, and relevance for producing a creative approach to current scientific epistemology. This paper hopes to change how trans-disciplinary/inter-disciplinary practices in the sciences are viewed and to open new vistas of thought in the history and philosophy of science.

Details

Kybernetes, vol. 44 no. 8/9
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 23 November 2012

Sami J. Habib and Paulvanna N. Marimuthu

Energy constraint is always a serious issue in wireless sensor networks, as the energy possessed by the sensors is limited and non‐renewable. Data aggregation at intermediate base…

Abstract

Purpose

Energy constraint is always a serious issue in wireless sensor networks, as the energy possessed by the sensors is limited and non-renewable. Data aggregation at intermediate base stations, whereby the sensors' data are aggregated before being communicated to the central server, increases the lifespan of the sensors. This paper proposes a query-based aggregation scheme within a Monte Carlo simulator to explore the best and worst possible query orders for aggregating the sensors' data at the base stations. The proposed query-based aggregation model can help the network administrator identify the best query orders for improving the performance of the base stations under uncertain query ordering. Furthermore, the paper aims to examine the feasibility of the proposed model with simultaneous transmissions at the base station and to derive a best-fit mathematical model to study the behavior of data aggregation under uncertain querying order.

Design/methodology/approach

The paper considers small and medium-sized wireless sensor networks comprising randomly deployed sensors in a square arena. It formulates the query-based data aggregation problem as an uncertain ordering problem within a Monte Carlo simulator, generating several thousand uncertain orders to schedule the responses of M sensors at the base station within the specified time interval. For each selected time interval, the model finds the best possible querying order to aggregate the data with reduced idle time and improved throughput. Furthermore, it extends the model to include multiple sensing parameters and multiple aggregating channels, thereby enabling the administrator to plan the capacity of its WSN according to specific time intervals known in advance.
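
As a rough illustration of the uncertain-ordering experiment (not the authors' simulator), the sketch below draws random query orders, schedules each sensor's response back to back within a fixed aggregation interval, and records the best and worst orders by aggregating efficiency. The arrival offsets and response durations are assumed inputs invented for the example.

```python
import numpy as np

def simulate_orders(arrival, duration, interval, trials=10000, seed=0):
    """Monte Carlo over uncertain query orders; returns the best and worst (order, efficiency)."""
    rng = np.random.default_rng(seed)
    m = len(duration)
    best, worst = (None, -1.0), (None, 2.0)
    for _ in range(trials):
        order = rng.permutation(m)              # one uncertain query order
        clock, busy = 0.0, 0.0
        for s in order:
            start = max(clock, arrival[s])      # base station idles until the response arrives
            finish = start + duration[s]
            if finish > interval:               # response does not fit in the aggregation interval
                break
            busy += duration[s]
            clock = finish
        efficiency = busy / interval            # fraction of the interval spent aggregating
        if efficiency > best[1]:
            best = (order, efficiency)
        if efficiency < worst[1]:
            worst = (order, efficiency)
    return best, worst
```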

Findings

The experimental results within the Monte Carlo simulator demonstrate that the query-based aggregation scheme shows a better trade-off in maximizing aggregating efficiency while reducing the average idle time experienced by individual sensors. The query-based aggregation model was tested for a WSN containing 25 sensors with a single sensing parameter transmitting data to a base station; the simulation results show continuous improvement in best-case performance from 56 percent to 96 percent over time intervals of 80 to 200 time units. Moreover, the query aggregation was extended to analyze the behavior of a WSN with 50 sensors sensing two environmental parameters and a base station equipped with multiple channels, which demonstrates a shorter aggregation time interval than with a single channel. The analysis of the average waiting time of individual sensors in the generated uncertain querying orders shows that the best-case scenario within a specified time interval yields a gain of 10 percent to 20 percent over the worst-case scenario, reducing the total transmission time by around 50 percent.

Practical implications

The proposed query-based data aggregation model can be used to predict the non-deterministic real-time behavior of the wireless sensor network in response to queries flooded by the base station.

Originality/value

This paper employs a novel framework to analyze all possible orderings of sensor responses to be aggregated at the base station within the stipulated aggregating time interval.

Details

International Journal of Pervasive Computing and Communications, vol. 8 no. 4
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 1 April 1994

Henry Sheng, Roberto Guerrieri and Alberto Sangiovanni‐Vincentelli

We present a generalized self‐scattering method for generating carrier free flight times in Monte Carlo simulation. Compared to traditional approaches, the added flexibility of…

Abstract

We present a generalized self‐scattering method for generating carrier free flight times in Monte Carlo simulation. Compared to traditional approaches, the added flexibility of this approach results in fewer fictitious scatterings, which is especially appealing for load balance and efficiency when a SIMD parallel computer is used. Speedups from 19% to 69% over an optimized variable‐Γ approach are shown for an implementation on the Connection Machine CM‐2. The performance sensitivities to applied fields and grid spacings are also presented. The conversion of existing variable‐Γ software to this new approach requires only a few changes.
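
For context, the standard constant-Γ self-scattering device that this paper generalizes can be stated in a few lines: pad the physical scattering rate up to a constant Γ so that free-flight times have a simple closed form, and treat the padding as fictitious events that leave the carrier state unchanged. The Python sketch below illustrates only that baseline idea under an assumed real_rate function; it is not the generalized method of the paper.

```python
import numpy as np

def free_flight(real_rate, gamma, k, rng):
    """Draw a free-flight time at constant total rate gamma; report whether the ending event is real."""
    t = -np.log(1.0 - rng.random()) / gamma        # exponential flight time, t = -ln(r) / Gamma
    is_real = rng.random() < real_rate(k) / gamma  # otherwise a fictitious self-scattering event
    return t, is_real                              # fictitious events leave the carrier state unchanged
```

A fictitious outcome simply resumes the flight with the same carrier state; reducing how often that happens is what the paper's added flexibility targets, which matters most when many carriers are advanced in lockstep on a SIMD machine.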

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 13 no. 4
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 1 November 2001

J.G. Marakis, J. Chamiço, G. Brenner and F. Durst

Notes that, in a full‐scale application of the Monte Carlo method for combined heat transfer analysis, problems usually arise from the large computing requirements. Here the…

Abstract

Notes that, in a full-scale application of the Monte Carlo method for combined heat transfer analysis, problems usually arise from the large computing requirements. Here the method to overcome this difficulty is the parallel execution of the Monte Carlo method in a distributed computing environment. Addresses the problem of determining the temperature field formed under the assumption of radiative equilibrium in an enclosure idealizing an industrial furnace. The medium contained in this enclosure absorbs, emits and anisotropically scatters thermal radiation. Discusses two topics in detail: first, the efficiency of the parallelization of the developed code, and second, the influence of the scattering behavior of the medium. The adopted parallelization method for the first topic is the decomposition of the statistical sample and its subsequent distribution among the available processors. The measured high efficiencies showed that this method is particularly suited to the target architecture of this study, a dedicated network of workstations supporting the message-passing paradigm. For the second topic, the results showed that taking isotropic scattering into account, as opposed to neglecting scattering, has a pronounced impact on the temperature distribution inside the enclosure. In contrast, consideration of the sharply forward scattering that is characteristic of real combustion particles leaves the predicted temperature field almost indistinguishable from the absorbing/emitting case.
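
The parallelization strategy described here, decomposition of the statistical sample among processors followed by a reduction of the partial results, can be sketched generically. The code below is a minimal illustration using Python multiprocessing rather than the message-passing workstation network of the study, and trace_bundles is only a placeholder for the actual radiative transfer kernel (emission, absorption, anisotropic scattering).

```python
import numpy as np
from multiprocessing import Pool

def trace_bundles(args):
    """Trace one process's share of photon bundles; placeholder for the real transfer kernel."""
    n_bundles, seed, n_cells = args
    rng = np.random.default_rng(seed)
    absorbed = np.zeros(n_cells)
    for _ in range(n_bundles):
        cell = rng.integers(n_cells)            # stand-in for emission, tracing, absorption, scattering
        absorbed[cell] += 1.0
    return absorbed

def parallel_monte_carlo(n_total, n_workers, n_cells):
    """Split the statistical sample evenly across processes and sum the partial tallies."""
    share = n_total // n_workers
    jobs = [(share, seed, n_cells) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        parts = pool.map(trace_bundles, jobs)
    return np.sum(parts, axis=0)

if __name__ == "__main__":
    tallies = parallel_monte_carlo(n_total=100_000, n_workers=4, n_cells=50)
```

Because the photon histories are statistically independent, this decomposition requires almost no communication until the final reduction, which is why such codes parallelize with high efficiency.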

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 11 no. 7
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 21 May 2021

Mohammad Raoufi and Aminah Robinson Fayek

This paper aims to cover the development of a methodology for hybrid fuzzy Monte Carlo agent-based simulation (FMCABS) and its implementation on a parametric study of construction…

Abstract

Purpose

This paper aims to cover the development of a methodology for hybrid fuzzy Monte Carlo agent-based simulation (FMCABS) and its implementation on a parametric study of construction crew performance.

Design/methodology/approach

The developed methodology uses fuzzy logic, Monte Carlo simulation and agent-based modeling to simulate the behavior of construction crews and predict their performance. Both random and subjective uncertainties are considered in model variables.
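
A loose sketch of how these three ingredients can be combined in one replication loop is given below: Monte Carlo sampling for a random input, a triangular fuzzy number defuzzified by its centroid for a subjective input, and a trivial agent interaction for crew performance. The variable names and the performance rule are invented for illustration and do not reproduce the authors' FMCABS models.

```python
import numpy as np

def defuzzify_triangular(a, b, c):
    """Centroid of a triangular fuzzy number (a, b, c)."""
    return (a + b + c) / 3.0

def simulate_crews(n_reps=1000, n_crews=5, seed=0):
    """One hybrid replication loop: random sampling + a defuzzified subjective input + agent interaction."""
    rng = np.random.default_rng(seed)
    motivation = defuzzify_triangular(0.5, 0.7, 0.9)     # subjective (fuzzy) input, illustrative values
    outcomes = []
    for _ in range(n_reps):
        skill = rng.normal(1.0, 0.1, size=n_crews)       # random (probabilistic) input
        performance = skill * motivation                 # invented performance rule
        # crude agent-based interaction: each crew drifts toward the group mean
        performance = 0.8 * performance + 0.2 * performance.mean()
        outcomes.append(performance.mean())
    return float(np.mean(outcomes)), float(np.std(outcomes))
```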

Findings

The developed methodology was implemented on a real case involving the parametric study of construction crew performance to assess its applicability and suitability for this context.

Research limitations/implications

This parametric study demonstrates a practical application for the hybrid FMCABS methodology. Though findings from this study are limited to the context of construction crew motivation and performance, the applicability of the developed methodology extends beyond the construction domain.

Practical implications

This paper will help construction practitioners to predict and improve crew performance by taking into account both random and subjective uncertainties.

Social implications

This paper will advance construction modeling by allowing for the assessment of social interactions among crews and their effects on crew performance.

Originality/value

The developed hybrid FMCABS methodology represents an original contribution, as it allows agent-based models to simultaneously process all types of variables (i.e. deterministic, random and subjective) in the same simulation experiment while accounting for interactions among different agents. In addition, the developed methodology is implemented in a novel and extensive parametric study of construction crew performance.

Article
Publication date: 18 May 2023

Adam Biggs, Greg Huffman, Joseph Hamilton, Ken Javes, Jacob Brookfield, Anthony Viggiani, John Costa and Rachel R. Markwald

Marksmanship data is a staple of military and law enforcement evaluations. This ubiquitous nature creates a critical need to use all relevant information and to convey outcomes in…

Abstract

Purpose

Marksmanship data is a staple of military and law enforcement evaluations. This ubiquitous nature creates a critical need to use all relevant information and to convey outcomes in a meaningful way for the end users. The purpose of this study is to demonstrate how simple simulation techniques can improve interpretations of marksmanship data.

Design/methodology/approach

This study uses three simulations to demonstrate the advantages of small arms combat modeling, including (1) the benefits of incorporating a Markov Chain into Monte Carlo shooting simulations; (2) how small arms combat modeling is superior to point-based evaluations; and (3) why continuous-time chains better capture performance than discrete-time chains.
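
As an illustration of point (1), the sketch below runs a Monte Carlo duel in which each shooter's next shot arrives after an exponentially distributed time and hits with a fixed probability, so the engagement evolves as a simple continuous-time chain. The rates and hit probabilities are invented for the example; the output is the probability that shooter A wins the engagement, in the spirit of the practical implications below.

```python
import numpy as np

def duel_win_probability(rate_a, p_hit_a, rate_b, p_hit_b, n_sims=20000, seed=0):
    """Probability that shooter A wins a simulated engagement against shooter B."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_sims):
        while True:
            # time to each shooter's next shot; redrawing both clocks each round is valid
            # because the exponential distribution is memoryless
            ta = rng.exponential(1.0 / rate_a)
            tb = rng.exponential(1.0 / rate_b)
            if ta <= tb:
                if rng.random() < p_hit_a:      # A fires first and may end the engagement
                    wins += 1
                    break
            elif rng.random() < p_hit_b:        # B fires first and may end the engagement
                break
    return wins / n_sims

# Example: a faster but less accurate shooter against a slower, more accurate one.
# duel_win_probability(rate_a=0.5, p_hit_a=0.30, rate_b=0.4, p_hit_b=0.35)
```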

Findings

The proposed method reduces ambiguity in low-accuracy scenarios while also incorporating a more holistic view of performance as outcomes simultaneously incorporate speed and accuracy rather than holding one constant.

Practical implications

This process determines the probability of winning an engagement against a given opponent while circumventing arbitrary discussions of speed and accuracy trade-offs. Someone wins 70% of combat engagements against a given opponent rather than scoring 15 more points. Moreover, risk exposure is quantified by determining the likely casualties suffered to achieve victory. This combination makes the practical consequences of human performance differences tangible to the end users. Taken together, this approach advances the operations research analyses of squad-level combat engagements.

Originality/value

For more than a century, marksmanship evaluations have used point-based systems to classify shooters. However, these scoring methods were developed for competitive integrity rather than lethality as points do not adequately capture combat capabilities. The proposed method thus represents a major shift in the marksmanship scoring paradigm.

Details

Journal of Defense Analytics and Logistics, vol. 7 no. 1
Type: Research Article
ISSN: 2399-6439

Abstract

This article surveys recent developments in the evaluation of point and density forecasts in the context of forecasts made by vector autoregressions. Specific emphasis is placed on highlighting those parts of the existing literature that are applicable to direct multistep forecasts and those parts that are applicable to iterated multistep forecasts. This literature includes advancements in the evaluation of forecasts in population (based on true, unknown model coefficients) and the evaluation of forecasts in the finite sample (based on estimated model coefficients). The article then uses Monte Carlo experiments to examine the finite-sample properties of some tests of equal forecast accuracy, focusing on the comparison of VAR forecasts with AR forecasts. These experiments show that the tests behave as expected given the theory. For example, using critical values obtained by bootstrap methods, tests of equal accuracy in population have empirical size approximately equal to nominal size.
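
One such experiment can be sketched compactly: simulate a bivariate VAR(1) whose first variable is actually an AR(1), form recursive one-step AR and VAR forecasts, compute a Diebold-Mariano-type statistic on the squared-error loss differential, and tabulate rejection rates across replications. The sketch below is illustrative only; it uses a standard normal critical value for brevity, whereas proper (e.g. bootstrap) critical values are needed for nested comparisons, and the design does not reproduce the chapter's experiments.

```python
import numpy as np

def simulate_var1(A, T, rng):
    """Simulate a bivariate VAR(1) with standard normal shocks."""
    y = np.zeros((T, 2))
    for t in range(1, T):
        y[t] = A @ y[t - 1] + rng.standard_normal(2)
    return y

def dm_statistic(y, split):
    """Diebold-Mariano-type statistic on squared-error loss, AR(1) vs VAR(1) forecasts of variable 0."""
    d = []
    for t in range(split, len(y) - 1):
        X, Y = y[:t], y[1:t + 1, 0]                       # recursive estimation sample
        b_ar = (X[:, 0] @ Y) / (X[:, 0] @ X[:, 0])        # AR(1) slope for variable 0 (OLS, no constant)
        b_var = np.linalg.lstsq(X, Y, rcond=None)[0]      # VAR(1) equation for variable 0
        e_ar = y[t + 1, 0] - b_ar * y[t, 0]
        e_var = y[t + 1, 0] - b_var @ y[t]
        d.append(e_ar ** 2 - e_var ** 2)
    d = np.array(d)
    return np.sqrt(len(d)) * d.mean() / d.std(ddof=1)

rng = np.random.default_rng(0)
A = np.array([[0.5, 0.0], [0.0, 0.3]])   # variable 0 is truly AR(1): equal forecast accuracy in population
stats = [dm_statistic(simulate_var1(A, 200, rng), split=100) for _ in range(200)]
# empirical size at a nominal 5% level; a standard normal critical value is used here only for simplicity
empirical_size = np.mean(np.abs(stats) > 1.96)
```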

Details

VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims
Type: Book
ISBN: 978-1-78190-752-8
