Search results
1 – 10 of over 8,000 results
Matthew Powers and Brian O'Flynn
Abstract
Purpose
Rapid sensitivity analysis and near-optimal decision-making in contested environments are valuable capabilities when providing military logistics support. Port of debarkation denial motivates maneuver from strategic operational locations, further complicating logistics support. Simulations enable the rapid concept design, experimentation and testing that these complicated logistic support demands require. However, simulation model analyses are time-consuming as output data complexity grows with simulation input. This paper proposes a methodology that leverages the benefits of simulation-based insight and the computational speed of approximate dynamic programming (ADP).
Design/methodology/approach
This paper describes a simulated contested logistics environment and demonstrates how its output data inform the parameters required for Q-learning, the ADP form of reinforcement learning. Q-learning output includes a near-optimal policy that prescribes a decision for each state modeled in the simulation. This paper's methods conform to DoD simulation modeling practices, complemented with AI-enabled decision-making.
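The Q-learning update the abstract refers to can be sketched on a toy problem; the five-state advance-or-hold chain below is hypothetical and stands in for the paper's contested-logistics simulation, which is not reproduced here.

```python
import random

# Tabular Q-learning sketch on a toy chain: action 1 advances toward a goal
# state, action 0 holds. The reward and transition model are illustrative
# stand-ins for simulation output data.
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate
N_STATES, N_ACTIONS = 5, 2

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy transition/reward model standing in for the simulation."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else state
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

random.seed(0)
for episode in range(1000):
    s = 0
    for _ in range(20):
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda i: Q[s][i])
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# The learned near-optimal policy prescribes a decision for each state
policy = [max(range(N_ACTIONS), key=lambda i: Q[s][i]) for s in range(N_STATES)]
print(policy)
```

In states 0 through 3 the greedy policy learns to advance, mirroring how a near-optimal policy prescribes one decision per modeled state.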
Findings
This study demonstrates the use of simulation output data for state-space reduction to mitigate the curse of dimensionality; without such reduction, massive amounts of simulation output data become unwieldy. This work also demonstrates how Q-learning parameters reflect simulation inputs so that simulation model behavior can be compared with near-optimal policies.
Originality/value
Fast computation is attractive for sensitivity analysis because it divorces evaluation from scenario-based limitations. The United States military is eager to embrace emerging AI analytic techniques to inform decision-making but is hesitant to abandon simulation modeling. This paper proposes Q-learning as an aid to overcoming cognitive limitations in a way that satisfies the desire to wield AI-enabled decision-making combined with modeling and simulation.
Eamonn O'Connor, Stephen Hynes, Amaya Vega and Natasha Evers
Abstract
Purpose
The purpose of this paper is to examine performance change in the Irish state-owned port sector over the 2000-2016 period using a case study approach.
Design/methodology/approach
For the analysis, qualitative sources are used to construct an explanatory account of the quantitative measures of productivity, profitability and traffic shift-share change across the major ports within the system.
Findings
The results show that overall change in performance largely follows that of the macro-economic performance of the region, characterised by pre-recession growth, decline during the recession and post-recession recovery. Across the ports, however, there was a notable divergence in performance post-recession. Identified factors affecting performance change across the period include demand-side structural change, labour rationalisation and degree of private sector participation.
Originality/value
This study addresses a gap in the formal evaluation of port performance in Ireland. The study further demonstrates the potential of in-depth case study analysis for uncovering insights into the drivers of performance across a number of dimensions, thus allowing for the contextualisation of results. The study of a small number of cases enables the use of rich qualitative sources to create strong narratives, which, combined with quantitative measures of performance, can lead to new insights.
Jihong Chen, Renjie Zhao, Wenjing Xiong, Zheng Wan, Lang Xu and Weipan Zhang
Abstract
Purpose
This paper aims to identify the contributors to freight rate fluctuations in the Suezmax tanker market. The study selects refinery output, crude oil price, the one-year charter rate and fleet development as the main influencing factors for the market analysis.
Design/methodology/approach
The paper uses a vector error correction model (VECM) to evaluate the degree of impact of each influencing factor on Suezmax tanker freight rates, as well as the interplay between these factors.
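The error-correction idea behind a VECM can be sketched in a minimal bivariate form. The synthetic cointegrated series below (standing in for, say, log crude oil price and log freight rate) and the Engle–Granger-style two-step estimate are illustrations only, not the paper's full multivariate VECM.

```python
import numpy as np

# Two-step error-correction sketch on synthetic cointegrated data.
rng = np.random.default_rng(1)
n = 400
x = np.cumsum(rng.normal(size=n))   # random walk, e.g. log crude oil price
y = 2.0 * x + rng.normal(size=n)    # cointegrated with x, e.g. log freight rate

# Step 1: estimate the long-run relation y = b*x by OLS
b = np.sum(x * y) / np.sum(x * x)
ect = y - b * x                     # error-correction term (deviation from equilibrium)

# Step 2: short-run dynamics; dy_t = a * ect_{t-1} + c * dx_t + u_t
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([ect[:-1], dx])
a, c = np.linalg.lstsq(Z, dy, rcond=None)[0]
print(round(b, 2), round(a, 2))
```

A negative adjustment coefficient `a` indicates that freight rates revert toward the long-run relation after a shock, which is the mechanism a VECM quantifies for each influencing factor.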
Findings
The conclusions were tested against 20 years of data, from 1999 to 2019, which confirmed the effectiveness of the methodology. The results provide a useful reference for scholars studying the patterns of fluctuation in Suezmax tanker freight rates.
Originality/value
This paper provides a decision-making support tool for tanker operators to cope with fluctuation risks in the tanker shipping market.
Cleyton Farias and Marcelo Silva
Abstract
Purpose
The authors explore the hypothesis that some movements in commodity prices are anticipated (news shocks) and can trigger aggregate fluctuations in small open emerging economies.
Design/methodology/approach
The authors build a multi-sector dynamic stochastic general equilibrium model with endogenous commodity production. There are five exogenous processes: a country-specific interest rate shock that responds to commodity price fluctuations, a productivity (TFP) shock for each sector and a commodity price shock. Both TFP and commodity price shocks are composed of unanticipated and anticipated components.
Findings
The authors show that news shocks to commodity prices lead to higher output, investment and consumption, and a countercyclical movement in the trade-balance-to-output ratio. The authors also show that commodity price news shocks explain about 24% of output aggregate fluctuations in the small open economy.
Practical implications
Given the importance of both anticipated and unanticipated commodity price shocks, policymakers should pay attention to developments in commodity markets when designing policies to attenuate business cycles. Future research should investigate the design of optimal fiscal and monetary policies in small open economies subject to news shocks in commodity prices.
Originality/value
This paper contributes to knowledge of the sources of fluctuations in emerging economies by highlighting the importance of a new source: news shocks in commodity prices.
Yong Ding, Peixiong Huang, Hai Liang, Fang Yuan and Huiyong Wang
Abstract
Purpose
Recently, deep learning (DL) has been widely applied in various aspects of human endeavor. However, studies have shown that DL models may also be a primary cause of data leakage, which raises new data privacy concerns. Membership inference attacks (MIAs), in which attackers investigate whether specific data samples exist in the training data of a target model, are a prominent threat to the privacy of users whose data are used for DL model training. The aim of this study is therefore to develop a method for defending against MIAs and protecting data privacy.
Design/methodology/approach
The proposed MIA defense adjusts the model's output by mapping it to a distribution with equal probability density. This approach preserves the accuracy of classification predictions while preventing attackers from identifying the training data.
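One simple reading of such an output mapping, sketched below with hypothetical numbers, is to flatten the confidence vector toward a uniform distribution while keeping the predicted class. This is an illustration of the general idea, not the authors' exact construction.

```python
import numpy as np

# Flatten a classifier's confidence vector toward uniform while preserving the
# argmax, so a membership-inference attacker sees almost no confidence signal.
def flatten_output(probs, eps=1e-3):
    k = len(probs)
    out = np.full(k, (1.0 - eps) / k)   # equal density over all classes
    out[np.argmax(probs)] += eps        # tiny margin preserves the prediction
    return out / out.sum()

raw = np.array([0.85, 0.10, 0.03, 0.02])   # overconfident output leaks membership
safe = flatten_output(raw)
print(np.argmax(safe) == np.argmax(raw))   # prediction is preserved
```

Because the released vector is nearly uniform regardless of whether the sample was in the training set, an attacker's membership classifier is pushed toward random (50%) accuracy while top-1 predictions are unchanged.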
Findings
Experiments demonstrate that the proposed defense method reduces the classification accuracy of MIAs to below 50%. Because an MIA is a binary classification task, accuracy at or below 50% is no better than random guessing, so the proposed method effectively prevents privacy leakage and improves data privacy protection.
Research limitations/implications
The method is designed only to defend against MIAs on black-box classification models.
Originality/value
The proposed MIA defense method is effective and has a low cost. Therefore, the method enables us to protect data privacy without incurring significant additional expenses.
Joe Garcia, Russell Shannon, Aaron Jacobson, William Mosca, Michael Burger and Roberto Maldonado
Abstract
Purpose
This paper describes an effort to provide a robust and secure software development paradigm supporting DevSecOps in a naval aviation enterprise (NAE) software support activity (SSA). The paradigm supports strong traceability and provability for the SSA's output product, known as an operational flight program (OFP): through a secure development environment (SDE), each critical software development function performed on the OFP during its development has a corresponding record represented on a blockchain.
Design/methodology/approach
An SDE is implemented as a virtual machine or container incorporating software development tools that are modified to support blockchain transactions. Each critical software development function (e.g. editing, compiling, linking) generates a blockchain transaction message, with associated information embedded in that function's output, which together can be used to prove integrity and support traceability. An attestation process provides proof that the toolchain containing the SDE has not been subject to unauthorized modification at the time a critical function is performed.
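The per-function transaction record can be sketched as a minimal hash-chained ledger. This toy chain is illustrative only: the paper's permissioned blockchain, attestation process and SDE tooling are not modeled, and the function names and artifacts below are hypothetical.

```python
import hashlib
import json
import time

# Minimal hash-chained ledger: each development function appends a record that
# commits to its predecessor, giving tamper-evident traceability.
def record(chain, function, artifact_hash):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"function": function, "artifact": artifact_hash,
            "prev": prev, "time": time.time()}
    # Hash the record body (without its own hash field), then attach the digest
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

chain = []
record(chain, "edit", hashlib.sha256(b"main.c v2").hexdigest())
record(chain, "compile", hashlib.sha256(b"main.o").hexdigest())
record(chain, "link", hashlib.sha256(b"ofp.bin").hexdigest())

# Traceability check: every record commits to the one before it
ok = all(chain[i]["prev"] == chain[i - 1]["hash"] for i in range(1, len(chain)))
print(ok)
```

Any later modification of an earlier record changes its digest and breaks the `prev` linkage of every subsequent record, which is the property the paper exploits for provability over an OFP's entire development history.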
Findings
Blockchain methods are shown to be a viable approach for supporting exhaustive traceability and strong provability of development system integrity for mission-critical software produced by an NAE SSA for NAE embedded systems.
Practical implications
A blockchain-based authentication approach that could be implemented at the OFP point-of-load would provide for fine-grain authentication of all OFP software components, with each component or module having its own proof-of-integrity (including the integrity of the used development tools) over its entire development history.
Originality/value
Many SSAs have established control procedures for development such as check-out/check-in. This does not prove the SSA output software is secure. For one thing, a build system does not necessarily enforce procedures in a way that is determinable from the output. Furthermore, the SSA toolchain itself could be attacked. The approach described in this paper enforces security policy and embeds information into the output of every development function that can be cross-referenced to blockchain transaction records for provability and traceability that only trusted tools, free from unauthorized modifications, are used in software development. A key original concept of this approach is that it treats assigned developer time as a transferable digital currency.
Keywords
- Software development
- Blockchain
- Cybersecurity
- Operational flight program
- Secure development environment
- Secure virtual machine
- Zero trust
- Embedded systems
- Mission-critical systems
- OFP
- DevOps
- DevSecOps
- Software support activity
- SSA
- SDE
- Permissioned blockchain
- Cryptocurrency
- Time-limited authorization for developer action
- TADA
- Code signing
- Trusted software guard
- SGX
- Trusted eXecution technology
- TXT
- Trusted platform module
- Self-hosting
- Controlled access blockchain
- CABlock
- Role-based access control
- RBAC
Theresa A. Kirchner, Linda L. Golden and Patrick L. Brockett
Abstract
Purpose
This longitudinal research examines US symphony orchestra sector organizations to determine individual efficiencies in allocating resources (donations, governmental/private funding, etc.) for desirable outputs (concerts, educational programs, community outreach). It provides researchers and managers with a tool for identifying, assessing and mitigating organizational inefficiencies.
Design/methodology/approach
This study assesses relative efficiencies in performing arts organizations using Data Envelopment Analysis (DEA), a widely used nonparametric, data-intensive benchmarking technique that determines an optimal “production frontier” of best-practice organizations among their peers and assesses their abilities to turn multivariate inputs into multivariate desired outputs.
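In the degenerate single-input, single-output case, CCR-style DEA efficiency reduces to each unit's output/input ratio relative to the best performer. The sketch below uses hypothetical orchestra figures; the paper's multivariate DEA instead solves a linear program per organization, which is not shown here.

```python
import numpy as np

# Degenerate one-input/one-output DEA sketch with hypothetical figures:
# efficiency = (output/input ratio) scaled so the best-practice unit scores 1.0.
funding = np.array([10.0, 8.0, 12.0, 6.0])   # input: resources (illustrative, M$)
concerts = np.array([120, 112, 150, 54])     # output: performances (illustrative)

ratio = concerts / funding
efficiency = ratio / ratio.max()             # frontier unit scores exactly 1.0
print(np.round(efficiency, 2))
```

Units scoring below 1.0 are relatively inefficient, and the frontier unit serves as their benchmark; with multiple inputs and outputs the same logic requires optimal input/output weights chosen by an LP for each unit.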
Findings
This analysis highlights efficiency differences in a wide range of orchestras in converting available resources into performance-related outputs. It provides individual arts organizations with useful results for developing practical benchmarks to achieve organizational efficiency improvement.
Research limitations/implications
This study provides constructive benchmarking guidance for improving the efficiencies of relatively inefficient organizations. Future analysis can expand the scope to utilize a two-stage DEA model to provide more specific guidance to arts organizations.
Practical implications
This pragmatic analysis enables arts/culture institutions to assess their organizational efficiencies and identify opportunities to optimize resources in producing social outputs for their target markets.
Social implications
Efficiency improvements enable performing arts organizations to provide additional artistic/social services, with fewer resources, to larger audiences.
Originality/value
This research demonstrates the abilities of DEA analysis to assess both a sector and its individual organizations to determine efficiencies, identify sources of inefficiencies and assess longitudinal efficiency trends.
Petra Pekkanen and Timo Pirttilä
Abstract
Purpose
The aim of this study is to empirically explore and analyze the concrete tasks of output measurement and the inherent challenges related to these tasks in a traditional and autonomous professional public work setting – the judicial system.
Design/methodology/approach
The analysis of the tasks is based on a categorization of general performance measurement motives (control-motivate-learn) and main stakeholder levels (society-organization-professionals). The analysis is exploratory and conducted as an empirical content analysis on materials and reports produced in two performance improvement projects conducted in European justice organizations.
Findings
The identified main tasks in the different categories relate to managing resources, controlling performance deviations, and encouraging improvement and development of performance. Based on the results, the key improvement areas for output measurement in professional public organizations are: greater objectivity and fairness in budgeting and work-allocation practices; more versatile and informative output measures that serve motivational and learning purposes; stronger professional self-management in setting output targets and producing outputs; and better organizational learning from output measurement.
Practical implications
The paper presents empirically founded practical examples of challenges and improvement opportunities related to the tasks of output measurement in professional public organizations.
Originality/value
This paper fulfils an identified need to study how general performance management motives realize as concrete tasks of output measurement in justice organizations.
Mesbah Fathy Sharaf and Abdelhalem Mahmoud Shahen
Abstract
Purpose
This paper investigates the asymmetric impact of the real effective exchange rate (REER) on Egypt's real domestic output from 1960 to 2020.
Design/methodology/approach
A Nonlinear Autoregressive Distributed Lag (NARDL) model is utilized to isolate real currency depreciations from appreciations and account for the potential asymmetry in the impact of the REER. The analyses account for the various channels via which the REER could affect domestic output.
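The core NARDL ingredient is the decomposition of REER changes into partial sums of appreciations and depreciations, which then enter the regression as separate regressors. The sketch below uses synthetic data and shows only the decomposition step, not the full NARDL estimation.

```python
import numpy as np

# NARDL partial-sum decomposition on a synthetic REER series: positive changes
# (appreciations) and negative changes (depreciations) are accumulated
# separately so their output effects can be estimated asymmetrically.
rng = np.random.default_rng(2)
reer = 100 + np.cumsum(rng.normal(size=60))   # illustrative REER series

d = np.diff(reer)
pos = np.cumsum(np.maximum(d, 0.0))           # cumulative appreciations
neg = np.cumsum(np.minimum(d, 0.0))           # cumulative depreciations

# The two partial sums reconstruct the total cumulative change exactly
print(np.allclose(pos + neg, np.cumsum(d)))
```

Separate coefficients on `pos` and `neg` are what allow the model to find, as this paper does, that depreciations contract output in the long run while appreciations do not.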
Findings
Results show evidence of a long-run asymmetry in the output effect of REER changes in which only real currency depreciations have a contractionary impact on output, while the REER has no impact on output in the short run.
Practical implications
The Egyptian monetary authority cannot rely on domestic currency depreciation as a policy instrument to boost domestic output.
Originality/value
Unlike most previous studies, which assume linearity in the impact of the REER on output, we relax this assumption and hypothesize that REER changes have an asymmetric effect on Egyptian domestic output. We use a long time span, from 1960 to 2020, and control for potential structural breaks in the REER–output nexus and the various channels through which the REER can affect domestic output.
Lukas Zenk, Dirk J. Primus and Stephan Sonnenburg
Abstract
Purpose
Do LEGO® SERIOUS PLAY® (LSP) workshops result in a better experience of flow components and higher levels of creative output than traditional meetings (MEET)? This research studies the extent to which LSP, as a specialized material-mediated and process-oriented cocreative workshop setting, differs from MEET, a traditional workshop setting. Hypotheses for differences in individual flow components (autotelic behavior, happiness, balance), group flow components (equal participation, continuous communication) and creative output were developed and tested in a quasi-experimental comparison between LSP and MEET.
Design/methodology/approach
The study was conducted with 39 practitioners in six teams from various industries. In total, 164 observations were collected during two workshops using the Experience Sampling Method. The creative output was assessed by peer evaluations of all participants, followed by structural analysis and quantitative group comparisons.
Findings
The results show that two components of individual flow experience (autotelic behavior, happiness) were significantly higher in LSP, and one of the components of group flow experience (continuous communication) was, as expected, significantly lower. Regarding creative output, the LSP teams outperformed the MEET teams. The study suggests that a process-oriented setting that includes time for individuals to independently explore their ideas using a different kind of material in the presence of other participants has a significant influence on the team result.
Practical implications
LSP can improve the components of participants' flow experience to have an impact on the creative output of teams. In cocreative settings like LSP, teams benefit from a combination of alone time and high-quality collaborative activities using boundary objects and a clear process to share their ideas.
Originality/value
This is the first quasi-experimental study with management practitioners as participants to compare LSP with a traditional and widespread workshop approach in the context of flow experience and creative output.