Search results

1 – 4 of 4
Content available
Article
Publication date: 14 May 2020

Matthew D. Ferguson, Raymond Hill and Brian Lunday


Abstract

Purpose

This study aims to compare linear programming and stable marriage approaches to the personnel assignment problem under conditions of uncertainty. Robust solutions should exhibit reduced variability in the presence of one or more additional constraints or perturbations added to a baseline problem.
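
As a rough illustration of the stable marriage approach named above, the sketch below runs the classic Gale-Shapley deferred-acceptance algorithm on a tiny, invented officer-to-billet instance; the data and function names are hypothetical and are not drawn from the article.

```python
# Minimal Gale-Shapley (deferred acceptance) sketch for officer-to-billet
# matching. Toy data; assumes equal numbers of officers and billets with
# complete preference lists.

def stable_match(officer_prefs, billet_prefs):
    """Return a stable officer -> billet matching (officers propose)."""
    # Rank lookup for billets: lower rank number means a more preferred officer.
    billet_rank = {b: {o: r for r, o in enumerate(prefs)}
                   for b, prefs in billet_prefs.items()}
    free = list(officer_prefs)                   # officers not yet matched
    next_choice = {o: 0 for o in officer_prefs}  # next billet each officer proposes to
    match = {}                                   # billet -> officer

    while free:
        officer = free.pop()
        billet = officer_prefs[officer][next_choice[officer]]
        next_choice[officer] += 1
        current = match.get(billet)
        if current is None:
            match[billet] = officer
        elif billet_rank[billet][officer] < billet_rank[billet][current]:
            match[billet] = officer              # billet prefers the new proposer
            free.append(current)                 # displaced officer becomes free again
        else:
            free.append(officer)                 # proposal rejected; try next billet
    return {o: b for b, o in match.items()}

officer_prefs = {"O1": ["B1", "B2"], "O2": ["B1", "B2"]}
billet_prefs = {"B1": ["O2", "O1"], "B2": ["O1", "O2"]}
print(stable_match(officer_prefs, billet_prefs))  # {'O2': 'B1', 'O1': 'B2'}
```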

Design/methodology/approach

Several variations of each approach are compared with respect to solution speed, solution quality as measured by officer-to-assignment preferences, and solution robustness as measured by the number of assignment changes required after inducing a set of representative perturbations or constraints on an assignment instance. These side constraints represent the realistic categorical priorities and limitations encountered by Army assignment managers, who solve this problem semiannually; the synthetic instances considered herein therefore emulate typical problem instances.
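
A minimal sketch of the robustness measure described above, under assumed synthetic data: solve a baseline linear assignment, perturb the costs, re-solve, and count how many officers change assignments. SciPy's linear_sum_assignment solver stands in here for the article's linear programming formulation.

```python
# Sketch of the robustness measure: count assignment changes after a
# perturbation. The cost matrix is synthetic; SciPy's assignment solver
# stands in for the article's linear programming model.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
cost = rng.integers(1, 10, size=(5, 5)).astype(float)  # officer x billet preference costs

_, baseline = linear_sum_assignment(cost)        # billet chosen for each officer

perturbed = cost.copy()
perturbed[:, 2] += 5.0                           # e.g., billet 2 becomes less desirable
_, new = linear_sum_assignment(perturbed)

changes = int(np.sum(baseline != new))           # robustness: fewer changes is better
print(f"{changes} of {len(baseline)} officers reassigned after the perturbation")
```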

Findings

The results provide insight regarding the trade-offs between traditional optimization and heuristic-based solution approaches.

Originality/value

The results indicate the viability of using the stable marriage algorithm for talent management via the talent marketplace currently used by both the U.S. Army and U.S. Air Force for personnel assignments.

Details

Journal of Defense Analytics and Logistics, vol. 4 no. 1
Type: Research Article
ISSN: 2399-6439

Keywords

Content available
Article
Publication date: 1 March 1999


Abstract

Details

Facilities, vol. 17 no. 3/4
Type: Research Article
ISSN: 0263-2772

Keywords

Content available
Article
Publication date: 1 June 1999


Abstract

Details

Property Management, vol. 17 no. 2
Type: Research Article
ISSN: 0263-7472

Keywords

Content available
Article
Publication date: 15 November 2022

Matthew Powers and Brian O'Flynn


Abstract

Purpose

Rapid sensitivity analysis and near-optimal decision-making in contested environments are valuable requirements when providing military logistics support. Port of debarkation denial motivates maneuver from strategic operational locations, further complicating logistics support. Simulations enable the rapid concept design, experimentation and testing needed to meet these complicated logistics support demands. However, simulation model analyses are time-consuming, as output data complexity grows with simulation input. This paper proposes a methodology that leverages the benefits of simulation-based insight and the computational speed of approximate dynamic programming (ADP).

Design/methodology/approach

This paper describes a simulated contested logistics environment and demonstrates how output data informs the parameters required for the ADP dialect of reinforcement learning (also known as Q-learning). Q-learning output includes a near-optimal policy that prescribes a decision for each state modeled in the simulation. This paper's methods conform to DoD simulation modeling practices, complemented with AI-enabled decision-making.
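
To make the Q-learning mechanics concrete, the toy sketch below learns a tabular policy for a five-state line world; the environment, rewards and parameter values are illustrative assumptions, not the article's simulation.

```python
# Toy tabular Q-learning sketch: a 5-state line world where the agent moves
# left/right and is rewarded for reaching the rightmost state. All parameters
# and the environment itself are illustrative assumptions.
import numpy as np

n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.95, 0.2
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(1)

def step(state, action):
    """Move on the line; reaching the last state pays +1 and ends the episode."""
    nxt = min(max(state + (1 if action == 1 else -1), 0), n_states - 1)
    done = nxt == n_states - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(500):                 # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)            # greedy decision prescribed for each state
print("greedy policy (0=left, 1=right):", policy)
```

The greedy policy extracted from the learned Q-table plays the role of the "near-optimal policy that prescribes a decision for each state" described above.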

Findings

This study demonstrates the use of simulation output data as a means of state-space reduction to mitigate the curse of dimensionality, since massive amounts of simulation output data otherwise become unwieldy. This work demonstrates how Q-learning parameters reflect simulation inputs so that simulation model behavior can be compared with near-optimal policies.
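
As a hedged illustration of state-space reduction driven by simulation output, the snippet below bins a continuous simulation observation into a few discrete Q-table states; the variable name and bin edges are hypothetical stand-ins for whatever the simulation actually records.

```python
# Illustrative state-space reduction: discretize a continuous simulation output
# into a small number of Q-table states. The 'inventory' variable and the bin
# edges are hypothetical.
import numpy as np

bin_edges = np.array([10.0, 50.0, 200.0])          # e.g., chosen from output percentiles

def to_state(inventory_level: float) -> int:
    """Map a raw simulation observation to a coarse discrete state index."""
    return int(np.digitize(inventory_level, bin_edges))

simulated_outputs = [3.2, 47.9, 180.0, 560.4]
print([to_state(x) for x in simulated_outputs])    # [0, 1, 2, 3]
```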

Originality/value

Fast computation is attractive for sensitivity analysis because it divorces evaluation from scenario-based limitations. The United States military is eager to embrace emerging AI analytic techniques to inform decision-making but is hesitant to abandon simulation modeling. This paper proposes Q-learning as an aid to overcoming cognitive limitations in a way that combines AI-enabled decision-making with modeling and simulation.

Details

Journal of Defense Analytics and Logistics, vol. 6 no. 2
Type: Research Article
ISSN: 2399-6439

Keywords
