Search results
1 – 10 of 71
Jared Nystrom, Raymond R. Hill, Andrew Geyer, Joseph J. Pignatiello and Eric Chicken
Abstract
Purpose
Present a method to impute missing data from a chaotic time series, in this case lightning prediction data, and then use that completed dataset to create lightning prediction forecasts.
Design/methodology/approach
Spatiotemporal kriging is used to estimate data that are autocorrelated in both space and time. The estimated data then feed an imputation methodology that completes a dataset used in lightning prediction.
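The abstract names kriging as the estimation engine without giving details. As an illustration only (not the authors' spatiotemporal formulation), the sketch below implements plain ordinary kriging in one spatial dimension with an assumed linear variogram: weights are obtained by solving the kriging system with a Lagrange multiplier enforcing that weights sum to one.

```python
import numpy as np

def ordinary_kriging(coords, values, targets, variogram):
    """Estimate values at target locations via ordinary kriging.

    coords: (n, d) known sample locations; values: (n,) observations;
    targets: (m, d) locations to estimate; variogram: callable gamma(h).
    Illustrative spatial-only sketch; the paper extends kriging to space-time.
    """
    n = len(coords)
    # Kriging system: variogram among samples plus a Lagrange row/column
    # that forces the weights to sum to one (unbiasedness constraint).
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(h)
    A[n, n] = 0.0
    estimates = []
    for t in targets:
        b = np.ones(n + 1)
        b[:n] = variogram(np.linalg.norm(coords - t, axis=-1))
        w = np.linalg.solve(A, b)[:n]  # kriging weights (Lagrange term dropped)
        estimates.append(w @ values)
    return np.array(estimates)

# Assumed linear variogram, chosen only for the demonstration.
gamma = lambda h: h

coords = np.array([[0.0], [1.0], [2.0]])
values = np.array([1.0, 3.0, 2.0])
# Kriging is an exact interpolator: estimating at a sampled location
# reproduces the observed value there.
est = ordinary_kriging(coords, values, np.array([[1.0]]), gamma)
```

In an imputation setting, the missing time-stamped observations play the role of `targets`, so each gap is filled from the weighted neighbors rather than a global mean.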
Findings
The techniques provided prove robust to the chaotic nature of the data, and the resulting time series displays evidence of smoothing while also preserving the signal of interest for lightning prediction.
Research limitations/implications
The research is limited to the data collected in support of weather prediction work through the 45th Weather Squadron of the United States Air Force.
Practical implications
These methods are important due to the increasing reliance on sensor systems. These systems often provide incomplete and chaotic data, which must be used despite collection limitations. This work establishes a viable data imputation methodology.
Social implications
Improved lightning prediction, as with any improved prediction methods for natural weather events, can save lives and resources due to timely, cautious behaviors as a result of the predictions.
Originality/value
To the authors’ knowledge, this is a novel application of both the imputation methods and the forecasting methods.
Joshua L. McDonald, Edward D. White, Raymond R. Hill and Christian Pardo
Abstract
Purpose
The purpose of this paper is to demonstrate an improved method for forecasting US Army recruiting.
Design/methodology/approach
Time series methods, regression modeling, principal components analysis and marketing research are included in this paper.
Findings
This paper demonstrates how multiple statistical methods, applied in a forecasting context, can account for the effects of inputs that a decision maker controls to some degree.
Research limitations/implications
This work informs US Army recruiting leadership on how the improved methodology can strengthen the recruitment process.
Practical implications
Improved US Army analytical technique for forecasting recruiting goals.
Originality/value
This work culls data from open sources, using a zip-code-based classification method to develop more comprehensive forecasting methods with which US Army recruiting leaders can better establish recruiting goals.
Zachary Hornberger, Bruce Cox and Raymond R. Hill
Abstract
Purpose
Large/stochastic spatiotemporal demand data sets can prove intractable for location optimization problems, motivating the need for aggregation. However, demand aggregation induces errors. Significant theoretical research has been performed related to the modifiable areal unit problem and the zone definition problem. Minimal research has been accomplished related to the specific issues inherent to spatiotemporal demand data, such as search and rescue (SAR) data. This study provides a quantitative comparison of various aggregation methodologies and their relation to distance and volume based aggregation errors.
Design/methodology/approach
This paper introduces and applies a framework for comparing both deterministic and stochastic aggregation methods using distance- and volume-based aggregation error metrics. This paper additionally applies weighted versions of these metrics to account for the reality that demand events are nonhomogeneous. These metrics are applied to a large, highly variable, spatiotemporal demand data set of SAR events in the Pacific Ocean. Comparisons using these metrics are conducted between six quadrat aggregations of varying scales and two zonal distribution models using hierarchical clustering.
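The exact error metrics are not given in the abstract, so the sketch below uses one plausible reading of a weighted distance-based aggregation error under quadrat aggregation: each demand point is represented by the weighted centroid of its grid cell, and the error is the weighted sum of point-to-representative distances. The function name and metric definition are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

def quadrat_aggregate(points, weights, k):
    """Aggregate weighted 2-D demand points onto a k x k quadrat grid.

    Returns each point's representative (its cell's weighted centroid) and
    a weighted distance-based aggregation error: sum of weight * distance
    from each demand point to its representative. Illustrative definitions.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    # Grid-cell index per point; clip so the max coordinate lands in-grid.
    idx = np.clip(((points - lo) / (hi - lo) * k).astype(int), 0, k - 1)
    cell = idx[:, 0] * k + idx[:, 1]
    centroids = np.zeros((k * k, 2))
    for c in np.unique(cell):
        in_cell = cell == c
        centroids[c] = np.average(points[in_cell], axis=0, weights=weights[in_cell])
    reps = centroids[cell]
    dist_err = np.sum(weights * np.linalg.norm(points - reps, axis=1))
    return reps, dist_err

# Synthetic stand-in for the SAR demand data (the real set is not public here).
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(200, 2))
w = np.ones(200)
_, err_coarse = quadrat_aggregate(pts, w, 2)  # 2 x 2 quadrats
_, err_fine = quadrat_aggregate(pts, w, 8)    # 8 x 8 quadrats
```

Comparing `err_coarse` and `err_fine` reproduces the qualitative finding that finer quadrats shrink distance-based error; the volume-error trade-off would require a held-out test set, as the paper's training/test split does.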
Findings
As quadrat fidelity increases, the distance-based aggregation error decreases, while the two deliberate zonal approaches reduce this error further while using fewer zones. However, the higher fidelity aggregations detrimentally affect volume error. Additionally, by splitting the SAR data set into training and test sets, this paper shows the stochastic zonal distribution aggregation method is effective at simulating actual future demands.
Originality/value
This study indicates that no single best aggregation method exists; by quantifying the trade-offs in aggregation-induced errors, practitioners can select the method that minimizes the errors most relevant to their study. The study also quantifies the ability of a stochastic zonal distribution method to effectively simulate future demand data.
Petar Jackovich, Bruce Cox and Raymond R. Hill
Abstract
Purpose
This paper aims to define the class of fragment constructive heuristics used to compute feasible solutions for the traveling salesman problem (TSP) into edge-greedy and vertex-greedy subclasses. As these subclasses of heuristics can create subtours, two known methodologies for subtour elimination on symmetric instances are reviewed and are expanded to cover asymmetric problem instances. This paper introduces a third novel subtour elimination methodology, the greedy tracker (GT), and compares it to both known methodologies.
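The greedy tracker itself is not specified in the abstract, so the sketch below shows the general idea it competes with: a classic edge-greedy TSP construction in which the shortest remaining edge is added only if both endpoints have degree below two and, via a union-find check, the edge does not close a premature subtour. This is a standard textbook heuristic, not the paper's GT method.

```python
import math

def greedy_edge_tour(dist):
    """Edge-greedy TSP construction with union-find subtour elimination.

    Repeatedly adds the cheapest edge that keeps every vertex degree <= 2
    and never closes a cycle until all n vertices are on one tour.
    Illustrative sketch; not the paper's greedy tracker (GT).
    """
    n = len(dist)
    parent = list(range(n))  # union-find over tour fragments

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    deg = [0] * n
    adj = [[] for _ in range(n)]
    edges = sorted((dist[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    count = 0
    for _, i, j in edges:
        if deg[i] < 2 and deg[j] < 2:
            ri, rj = find(i), find(j)
            # Join two different fragments, or close the final Hamiltonian cycle.
            if ri != rj or count == n - 1:
                parent[ri] = rj
                deg[i] += 1; deg[j] += 1
                adj[i].append(j); adj[j].append(i)
                count += 1
                if count == n:
                    break
    # Walk the completed cycle starting at vertex 0 to recover tour order.
    tour, prev, cur = [0], None, 0
    while True:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        if nxt == 0:
            break
        tour.append(nxt)
        prev, cur = cur, nxt
    return tour

# Four corners of a unit square: the greedy tour is the perimeter, length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
tour = greedy_edge_tour(dist)
```

On an asymmetric instance the same degree-and-cycle check applies, but fragment orientation must also be tracked, which is exactly the extension the paper reviews.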
Design/methodology/approach
Computational results for all three subtour elimination methodologies are generated across 17 symmetric instances ranging in size from 29 vertices to 5,934 vertices, as well as 9 asymmetric instances ranging in size from 17 to 443 vertices.
Findings
The results demonstrate the GT is the fastest method for preventing subtours for instances below 400 vertices. Additionally, a distinction between fragment constructive heuristics and the subtour elimination methodology used to ensure the feasibility of resulting solutions enables the introduction of a new vertex-greedy fragment heuristic called ordered greedy.
Originality/value
This research has two main contributions: first, it introduces a novel subtour elimination methodology. Second, it introduces the concept of ordered lists, which remaps the TSP into a new space with promising initial computational results.
Sarah Neumann, Darryl Ahner and Raymond R. Hill
Abstract
Purpose
This paper aims to examine whether changing the clustering of countries within a United States Combatant Command (COCOM) area of responsibility promotes improved forecasting of conflict.
Design/methodology/approach
In this paper, statistical learning methods are used to create new country clusters, which are then used in a comparative analysis of model-based conflict prediction.
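The abstract does not say which statistical learning method produces the new clusters, so the sketch below uses plain k-means on synthetic country feature vectors purely as a stand-in: countries with similar features are regrouped regardless of their current COCOM assignment, which is the structural move the paper evaluates.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means clustering (Lloyd's algorithm).

    An illustrative stand-in for the paper's statistical-learning
    clustering; X rows would be per-country feature vectors.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # random initial centers
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

# Two well-separated synthetic "country" groups (hypothetical feature data).
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels, centers = kmeans(X, 2)
```

The resulting `labels` define the re-clustered areas of responsibility that would then feed the downstream conflict-prediction models for comparison against the current COCOM grouping.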
Findings
In this study, a reorganization of the countries assigned to specific areas of responsibility is shown to improve the ability of models to predict conflict.
Research limitations/implications
The study is based on actual historical data and is purely data driven.
Practical implications
The study demonstrates the utility of the analytical methodology but carries no implementation recommendations.
Originality/value
This is the first study to use the statistical methods employed to not only investigate the re-clustering of countries but more importantly the impact of that change on analytical predictions.
Matthew D. Ferguson, Raymond Hill and Brian Lunday
Abstract
Purpose
This study aims to compare linear programming and stable marriage approaches to the personnel assignment problem under conditions of uncertainty. Robust solutions should exhibit reduced variability in the presence of one or more additional constraints or problem perturbations added to some baseline problems.
Design/methodology/approach
Several variations of each approach are compared with respect to solution speed, solution quality as measured by officer-to-assignment preferences and solution robustness as measured by the number of assignment changes required after inducing a set of representative perturbations or constraints to an assignment instance. These side constraints represent the realistic assignment categorical priorities and limitations encountered by army assignment managers who solve this problem semiannually, and thus the synthetic instances considered herein emulate typical problem instances.
Findings
The results provide insight regarding the trade-offs between traditional optimization and heuristic-based solution approaches.
Originality/value
The results indicate the viability of using the stable marriage algorithm for talent management via the talent marketplace currently used by both the U.S. Army and U.S. Air Force for personnel assignments.
Nickolas Zaller, Lisa Barry, Jane Dorotik, Jennifer James, Andrea K. Knittel, Fernando Murillo, Stephanie Grace Prost and Brie Williams