Search results

1 – 10 of over 30000
Content available
Article
Publication date: 18 May 2023

Adam Biggs, Greg Huffman, Joseph Hamilton, Ken Javes, Jacob Brookfield, Anthony Viggiani, John Costa and Rachel R. Markwald

Abstract

Purpose

Marksmanship data is a staple of military and law enforcement evaluations. This ubiquitous nature creates a critical need to use all relevant information and to convey outcomes in a meaningful way for the end users. The purpose of this study is to demonstrate how simple simulation techniques can improve interpretations of marksmanship data.

Design/methodology/approach

This study uses three simulations to demonstrate the advantages of small arms combat modeling, including (1) the benefits of incorporating a Markov Chain into Monte Carlo shooting simulations; (2) how small arms combat modeling is superior to point-based evaluations; and (3) why continuous-time chains better capture performance than discrete-time chains.
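
The duel dynamics described in (1) can be illustrated with a minimal Monte Carlo sketch (not the authors' model; the firing rates and hit probabilities below are invented). Each shooter's time to fire is exponentially distributed, so the engagement is a continuous-time Markov chain whose absorbing states are "A hits first" and "B hits first":

```python
import random

def simulate_duel(rate_a, p_hit_a, rate_b, p_hit_b, n=100_000, seed=1):
    """Continuous-time Markov duel: each shooter fires after an
    exponentially distributed delay; the first hit ends the engagement.
    Returns the fraction of engagements won by shooter A."""
    rng = random.Random(seed)
    wins_a = 0
    for _ in range(n):
        while True:
            # Competing exponential clocks: whichever fires first shoots.
            t_a = rng.expovariate(rate_a)
            t_b = rng.expovariate(rate_b)
            if t_a < t_b:
                if rng.random() < p_hit_a:   # A fires and hits: A wins
                    wins_a += 1
                    break
            else:
                if rng.random() < p_hit_b:   # B fires and hits: B wins
                    break
            # Both missed or neither hit yet: memorylessness lets us redraw.
    return wins_a / n
```

By thinning, A's effective kill rate is rate_a x p_hit_a, so the analytic win probability is that rate divided by the sum of both effective rates; the simulation converges to it.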

Findings

The proposed method reduces ambiguity in low-accuracy scenarios while also incorporating a more holistic view of performance as outcomes simultaneously incorporate speed and accuracy rather than holding one constant.

Practical implications

This process determines the probability of winning an engagement against a given opponent while circumventing arbitrary discussions of speed and accuracy trade-offs: a shooter wins 70% of combat engagements against a given opponent, for example, rather than scoring 15 more points. Moreover, risk exposure is quantified by determining the likely casualties suffered to achieve victory. This combination makes the practical consequences of human performance differences tangible to the end users. Taken together, this approach advances the operations research analyses of squad-level combat engagements.

Originality/value

For more than a century, marksmanship evaluations have used point-based systems to classify shooters. However, these scoring methods were developed for competitive integrity rather than lethality as points do not adequately capture combat capabilities. The proposed method thus represents a major shift in the marksmanship scoring paradigm.

Details

Journal of Defense Analytics and Logistics, vol. 7 no. 1
Type: Research Article
ISSN: 2399-6439

Content available
Article
Publication date: 4 October 2018

Denis S. Clayson, Alfred E. Thal, Jr and Edward D. White III

Abstract

Purpose

The purpose of this study was to investigate the stability of the cost performance index (CPI) for environmental remediation projects as the topic is not addressed in the literature. CPI is defined as the earned value of work performed divided by the actual cost of the work, and CPI stability represents the point in time in a project after which the CPI varies by less than 20 percent (measured in different ways).
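
Using the definition above, a stability point can be computed from a monthly CPI series. This sketch measures variation relative to the final CPI, one of the several ways the abstract notes stability can be measured; the series in the usage example is invented:

```python
def cpi_stability_point(ev, ac, tol=0.20):
    """Return the earliest month index after which the CPI (earned value
    divided by actual cost) stays within +/- tol of the final CPI.
    Returns None if the series never stabilizes."""
    cpi = [e / a for e, a in zip(ev, ac)]
    final = cpi[-1]
    for i in range(len(cpi)):
        # Stable from month i onward if every later CPI is in the band.
        if all(abs(c - final) <= tol * final for c in cpi[i:]):
            return i
    return None
```

For example, monthly EV of [10, 30, 60, 100] against AC of [20, 40, 70, 100] gives CPIs of [0.50, 0.75, 0.86, 1.00], which enter the 20% band around the final value at index 2.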

Design/methodology/approach

After collecting monthly earned value management (EVM) data for 136 environmental remediation projects from a United States federal agency in fiscal years 2012 and 2013, the authors used the nonparametric Wilcoxon signed-rank test to analyze CPI stability. The authors also used nonparametric statistical comparisons to identify any significant relationships between CPI stability and independent variables representing project and contract characteristics.

Findings

The CPI for environmental projects did not stabilize until the projects were 41 percent complete with respect to project duration. The most significant factors contributing to CPI stability were categorized into the following managerial insights: contractor qualifications, communication, stakeholder engagement, contracting strategy, competition, EVM factors, and macro project factors.

Originality/value

As CPI stability for environmental remediation projects has not been reported in the literature, this research provides new insights to help project managers understand when the CPIs of environmental remediation projects stabilize and which factors have the most impact on CPI stability.

Details

Journal of Defense Analytics and Logistics, vol. 2 no. 2
Type: Research Article
ISSN: 2399-6439

Content available
Article
Publication date: 1 March 2013

Narendra C. Bhandari

Abstract

This is the first study of its kind to explore the relationship between studentsʼ year of education and their intention to start a business once they have completed their undergraduate studies. The article also examines studentsʼ cumulative grade point average and their intention to start a business after graduation. These pioneering findings are based on an extensive title review (including summaries) of hundreds of articles related to these factors listed in EBSCO.

Details

New England Journal of Entrepreneurship, vol. 16 no. 1
Type: Research Article
ISSN: 2574-8904

Content available
Article
Publication date: 3 December 2019

Pasquale Legato and Rina Mary Mazza

Abstract

Purpose

The use of queueing network models was stimulated by the appearance (1975) of the exact product-form solution of a class of open, closed and mixed queueing networks obeying the local balance principle, solved a few years later by the popular mean value analysis algorithm (1980). Since then, research efforts have focused on approximate solutions for non-exponential services and non-purely-random mechanisms in customer processing and routing. The purpose of this paper is to examine the suitability of modeling choices and solution approaches consolidated in other domains with respect to two key logistic processes in container terminals.
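
The mean value analysis algorithm cited here is, in its exact form for closed product-form networks of single-server queues, a short recursion built on the arrival theorem and Little's law. A textbook sketch (illustrative only, not the paper's approximation for non-exponential services):

```python
def mva(service_demands, n_customers):
    """Exact Mean Value Analysis for a closed product-form network of
    single-server queues. service_demands[k] is visit ratio times mean
    service time at station k. Returns (throughput, mean queue lengths)."""
    K = len(service_demands)
    q = [0.0] * K                     # mean queue lengths with 0 customers
    X = 0.0
    for n in range(1, n_customers + 1):
        # Arrival theorem: an arriving customer sees the queue lengths
        # of the network with one customer fewer.
        r = [d * (1.0 + q[k]) for k, d in enumerate(service_demands)]
        X = n / sum(r)                # system throughput (Little's law, network-wide)
        q = [X * rk for rk in r]      # Little's law per station
    return X, q
```

With demands [1.0, 0.5] and one customer, the recursion gives a throughput of 1/1.5, the familiar single-customer cycle-time result.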

Design/methodology/approach

In particular, the analytical solution of queueing networks is assessed for the vessel arrival-departure process and the container internal transfer process with respect to a real terminal of pure transshipment.

Findings

Numerical experiments show the extent to which a decomposition-based approximation, under fixed or state-dependent arrival rates, may be suitable for the approximate analysis of the queueing network models.

Research limitations/implications

The limitation of adopting exponential service time distributions and Poisson flows is highlighted.

Practical implications

Comparisons with a simulation-based solution deliver numerical evidence on the companion use of simulation in the daily practice of managing operations in a finite-time horizon under complex policies.

Originality/value

Discussion of some open modeling issues and encouraging results provides guidelines for future research efforts and/or suitable adaptation to container terminal logistics of the large body of techniques and algorithms available today for supporting long-run decisions.

Details

Maritime Business Review, vol. 5 no. 1
Type: Research Article
ISSN: 2397-3757

Open Access
Article
Publication date: 24 May 2024

Long Li, Binyang Chen and Jiangli Yu

Abstract

Purpose

The selection of sensitive temperature measurement points is the premise of thermal error modeling and compensation. However, most of the sensitive temperature measurement point selection methods do not consider the influence of the variability of thermal sensitive points on thermal error modeling and compensation. This paper considers the variability of thermal sensitive points, and aims to propose a sensitive temperature measurement point selection method and thermal error modeling method that can reduce the influence of thermal sensitive point variability.

Design/methodology/approach

Taking the truss robot as the experimental object, the finite element method is used to construct the simulation model of the truss robot, and the temperature measurement point layout scheme is designed based on the simulation model to collect the temperature and thermal error data. After the clustering of the temperature measurement point data is completed, the improved attention mechanism is used to extract the temperature data of the key time steps of the temperature measurement points in each category for thermal error modeling.

Findings

Comparison with the conventional thermal error modeling method based on fixed sensitive temperature measurement points shows that the method proposed in this paper is more flexible in handling sensitive temperature measurement points and more stable in prediction accuracy.

Originality/value

The Grey Attention-Long Short Term Memory (GA-LSTM) thermal error prediction model proposed in this paper can reduce the influence of the variability of thermal sensitive points on the accuracy of thermal error modeling during long-term processing and improve the accuracy of the thermal error prediction model. The approach has practical application value and can guide thermal error compensation.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Content available
Article
Publication date: 10 May 2021

Zachary Hornberger, Bruce Cox and Raymond R. Hill

Abstract

Purpose

Large/stochastic spatiotemporal demand data sets can prove intractable for location optimization problems, motivating the need for aggregation. However, demand aggregation induces errors. Significant theoretical research has been performed on the modifiable areal unit problem and the zone definition problem, but minimal research has addressed the specific issues inherent to spatiotemporal demand data, such as search and rescue (SAR) data. This study provides a quantitative comparison of various aggregation methodologies and their relation to distance- and volume-based aggregation errors.

Design/methodology/approach

This paper introduces and applies a framework for comparing both deterministic and stochastic aggregation methods using distance- and volume-based aggregation error metrics. This paper additionally applies weighted versions of these metrics to account for the reality that demand events are nonhomogeneous. These metrics are applied to a large, highly variable, spatiotemporal demand data set of SAR events in the Pacific Ocean. Comparisons using these metrics are conducted between six quadrat aggregations of varying scales and two zonal distribution models using hierarchical clustering.
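
One plausible form of a weighted, distance-based aggregation error for a quadrat scheme is sketched below (an illustrative construction, not necessarily the paper's exact metric): each demand event is displaced to the centre of its quadrat, and the error is the demand-weighted mean displacement.

```python
import math

def distance_error(points, weights, cell):
    """Distance-based aggregation error for square quadrats of side `cell`:
    each demand point (x, y) is replaced by its cell centre, and the error
    is the weighted mean displacement this introduces."""
    total_w = sum(weights)
    err = 0.0
    for (x, y), w in zip(points, weights):
        # Centre of the quadrat containing (x, y).
        cx = (math.floor(x / cell) + 0.5) * cell
        cy = (math.floor(y / cell) + 0.5) * cell
        err += w * math.hypot(x - cx, y - cy)
    return err / total_w
```

Shrinking the quadrat side drives this error toward zero, consistent with the finding below that higher quadrat fidelity lowers distance-based error (while, as the paper notes, affecting volume error differently).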

Findings

As quadrat fidelity increases, the distance-based aggregation error decreases, while the two deliberate zonal approaches further reduce this error while using fewer zones. However, the higher-fidelity aggregations detrimentally affect volume error. Additionally, by splitting the SAR data set into training and test sets, this paper shows that the stochastic zonal distribution aggregation method is effective at simulating actual future demands.

Originality/value

This study indicates that no single best aggregation method exists; by quantifying the trade-offs in aggregation-induced errors, practitioners can use the method that minimizes the errors most relevant to their study. The study also quantifies the ability of a stochastic zonal distribution method to effectively simulate future demand data.

Details

Journal of Defense Analytics and Logistics, vol. 5 no. 1
Type: Research Article
ISSN: 2399-6439

Open Access
Article
Publication date: 3 February 2021

Geoff A.M. Loveman and Joel J.E. Edney

Abstract

Purpose

The purpose of the present study was the development of a methodology for translating predicted rates of decompression sickness (DCS), following tower escape from a sunken submarine, into predicted probability of survival, a more useful statistic for making operational decisions.

Design/methodology/approach

Predictions were made, using existing models, for the probabilities of a range of DCS symptoms following submarine tower escape. Subject matter expert estimates of the effect of these symptoms on a submariner’s ability to survive in benign weather conditions on the sea surface until rescued were combined with the likelihoods of the different symptoms occurring using standard probability theory. Plots were generated showing the dependence of predicted probability of survival following escape on the escape depth and the pressure within the stricken submarine.
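
The "standard probability theory" combination described here is the law of total probability over mutually exclusive DCS outcomes; a minimal sketch with invented illustrative numbers (not the study's figures):

```python
def p_survival(symptom_probs, survival_given_symptom):
    """Total probability of survival: weight each mutually exclusive DCS
    outcome (including 'no DCS') by the expert-estimated probability of
    surviving it. Outcome probabilities must sum to 1."""
    assert abs(sum(symptom_probs.values()) - 1.0) < 1e-9
    return sum(p * survival_given_symptom[s]
               for s, p in symptom_probs.items())
```

For example, outcome probabilities of 0.90 (no DCS), 0.08 (mild) and 0.02 (severe), with survival estimates of 0.99, 0.90 and 0.40 respectively, combine to a predicted survival probability of 0.971.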

Findings

Current advice on whether to attempt tower escape is based on avoiding rates of DCS above approximately 5%–10%. Consideration of predicted survival rates, based on subject matter expert opinion, suggests that the current advice might be considered conservative in the distressed submarine scenario, as DCS rates of 10% are not anticipated to markedly affect survival rates.

Originality/value

To the authors’ knowledge, this study represents the first attempt to quantify the effect of different DCS symptoms on the probability of survival in submarine tower escape.

Details

Journal of Defense Analytics and Logistics, vol. 5 no. 1
Type: Research Article
ISSN: 2399-6439

Content available
Article
Publication date: 3 August 2018

Scott C. Hewitson, Jonathan D. Ritschel, Edward White and Gregory Brown

Abstract

Purpose

Recent legislation resulted in an elevation of operating and support (O&S) costs’ relative importance for decision-making in Department of Defense programs. However, a lack of research in O&S hinders a cost analyst’s abilities to provide accurate sustainment estimates. Thus, the purpose of this paper is to investigate when Air Force aircraft O&S costs stabilize and to what degree. Next, a parametric O&S model is developed to predict median O&S costs for use as a new tool for cost analyst practitioners.

Design/methodology/approach

Utilizing the Air Force total ownership cost database, 44 programs consisting of 765 observations from 1996 to 2016 are analyzed. First, stability is examined in three areas: total O&S costs, the six O&S cost element structures and by aircraft type. Next, stepwise regression is used to predict median O&S costs per total active inventory (CPTAI) and identify influential variables.
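
Stepwise regression of the kind described can be sketched as greedy forward selection on residual sum of squares (a generic illustration; the predictor names below are hypothetical, and the authors' procedure may differ in its entry/exit criteria):

```python
import numpy as np

def forward_stepwise(X, y, names, max_terms=3):
    """Greedy forward selection: repeatedly add the predictor that most
    reduces the residual sum of squares of an OLS fit with intercept.
    Stops when no candidate improves the fit or max_terms is reached."""
    selected, remaining = [], list(range(X.shape[1]))

    def rss(cols):
        # OLS fit on an intercept plus the chosen columns.
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return float(r @ r)

    while remaining and len(selected) < max_terms:
        best = min(remaining, key=lambda c: rss(selected + [c]))
        if rss(selected + [best]) >= rss(selected) - 1e-12:
            break                      # no meaningful improvement
        selected.append(best)
        remaining.remove(best)
    return [names[c] for c in selected]
```

On a toy data set where the response depends only on one predictor, the procedure selects that predictor and stops.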

Findings

Stability results vary by category but generally are found to occur approximately five years from initial operating capability. The regression model explains 89.01 per cent of the variance in the data set when predicting median O&S CPTAI. Aircraft type, location of lead logistics center and unit cost are the three largest contributing factors.

Originality/value

Results from this research provide insight to cost analysts on when to start using actual O&S costs as a baseline for estimates in lieu of analogous cost program data and also derives a new parametric O&S estimating tool designed as a cross-check to current estimating methodologies.

Details

Journal of Defense Analytics and Logistics, vol. 2 no. 1
Type: Research Article
ISSN: 2399-6439

Content available
Article
Publication date: 10 May 2023

Pasquale Legato and Rina Mary Mazza

Abstract

Purpose

An integrated queueing network focused on container storage/retrieval operations occurring on the yard of a transshipment hub is proposed. The purpose of the network is to support decisions related to the organization of the yard area, while also accounting for operations policies and times on the quay.

Design/methodology/approach

A discrete-event simulation model is used to reproduce container handling on both the quay and yard areas, along with the transfer operations between the two. The resulting times, properly estimated by the simulation output, are fed to a simpler queueing network amenable to solution via algorithms based on mean value analysis (MVA) for product-form networks.

Findings

Numerical results justify the proposed approach for getting a fast, yet accurate analytical solution that allows carrying out performance evaluation with respect to both organizational policies and operations management on the yard area.

Practical implications

Practically, the expected performance measures on the yard subsystem can be obtained avoiding additional time-expensive simulation experiments on the entire detailed model.

Originality/value

As a major takeaway, extending the MVA to generally distributed service times has proven to produce reliable estimates of expected values for both user- and system-oriented performance metrics.

Details

Maritime Business Review, vol. 8 no. 4
Type: Research Article
ISSN: 2397-3757

Content available
Article
Publication date: 10 February 2022

Buddhi A. Weerasinghe, H. Niles Perera and Phillip Kießner

Abstract

Purpose

This paper examines how the altering nature of planning decisions affects operational efficiency in seaport container terminals. The uncertainty and the role of the planner were investigated considering the dynamic integrated planning function of the quay to yard interface.

Design/methodology/approach

A system dynamics model has been built to illustrate the integrated dynamic environment. Data collection was conducted at a leading container terminal at a hub port. The model was simulated for different scenarios to derive findings.

Findings

The planner has been identified as the agent who makes alterations between the initial operational plan and the actual plan. The initial plan remains uncertain even when there is no impact from crane breakdowns, requiring a significant number of alterations. The planner who worked on the yard plan altered the initial plan more (approximately 45%) than the planner who worked on the vessel plan did. As a result, the feedback loop created by the moves remaining at each hourly operation influences the upcoming operation as much as crane breakdowns do.

Originality/value

The uncertainty and the role of the planner were investigated considering the dynamic integrated planning function of the quay to yard interface. The findings of this study are significant since terminal efficiency is examined considering the quayside and landside as an integrated system.

Details

Maritime Business Review, vol. 8 no. 1
Type: Research Article
ISSN: 2397-3757
