Search results
Huihuang Zhao, Jianzhen Chen, Shibiao Xu, Ying Wang and Zhijun Qiao
Abstract
Purpose
The purpose of this paper is to develop a compressive sensing (CS) algorithm for noisy solder joint imagery compression and recovery. A fast gradient-based compressive sensing (FGbCS) approach is proposed based on convex optimization. The proposed algorithm improves performance in terms of peak signal-to-noise ratio (PSNR) and computational cost.
Design/methodology/approach
Unlike traditional CS methods, the authors first transform a noisy solder joint image into a sparse signal by a discrete cosine transform (DCT), so that reconstructing the noisy solder joint imagery becomes a convex optimization problem. A gradient-based method is then used to solve the problem. To improve efficiency, the authors assume the problem is convex with a Lipschitz-continuous gradient and replace the iteration step-size parameter with the Lipschitz constant. On this basis, an FGbCS algorithm is proposed to recover the noisy solder joint imagery under different parameters.
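The paper's exact FGbCS update is not reproduced here, but the core idea it describes, a gradient step whose size comes from the Lipschitz constant followed by a sparsity-promoting step, can be sketched as a minimal ISTA-style loop. A generic random sensing matrix stands in for the DCT-based setup, and `ista_recover`, `lam` and the toy dimensions are illustrative choices, not the authors' settings:

```python
import numpy as np

def ista_recover(A, y, lam=0.05, n_iter=500):
    """Proximal-gradient (ISTA-style) sparse recovery.

    Minimises 0.5*||A x - y||^2 + lam*||x||_1.  The step size is 1/L,
    where L (the largest eigenvalue of A^T A) is the Lipschitz constant
    of the smooth term's gradient, as the abstract describes.
    """
    L = np.linalg.eigvalsh(A.T @ A).max()          # Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                      # gradient of the smooth term
        z = x - g / L                              # gradient step of size 1/L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Toy demo: a 5-sparse coefficient vector observed through a random matrix.
rng = np.random.default_rng(0)
n, m = 128, 64
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.normal(0.0, 3.0, 5)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)        # noisy measurements
x_hat = ista_recover(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Accelerated variants of this loop (adding a momentum term) are what lift the convergence rate from O(1/k) to O(1/k^2).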
Findings
Experiments reveal that the proposed algorithm achieves better PSNR results at lower computational cost than classical algorithms such as Orthogonal Matching Pursuit (OMP), Greedy Basis Pursuit (GBP), Subspace Pursuit (SP), Compressive Sampling Matching Pursuit (CoSaMP) and Iteratively Reweighted Least Squares (IRLS). The proposed algorithm converges at the faster rate O(1/k^2) instead of O(1/k).
Practical implications
This paper provides a novel methodology for the CS of noisy solder joint imagery, and the proposed algorithm can also be applied to the compression and recovery of other imagery.
Originality/value
According to CS theory, a sparse or compressible signal can be represented with far fewer measurements than the Nyquist theorem requires. The new development may provide some fundamental guidelines for noisy imagery compression and recovery.
Konrad Farrugia, Matthew Attard and Peter J. Baldacchino
Abstract
This study delves into the determinants and praxis of derivative hedging instrument (DHI) usage in Malta, a small island state. Empirical evidence is also provided on the impact of DHI usage and the adoption of a hedge accounting (HA) model on entities' financial statements. A mixed-methodology design is deployed, involving (1) a series of statistical models and tests and (2) seven semi-structured interviews with senior professionals.
The data comprise proxy-variable values collected from the financial statements of 568 firm-years from 107 Maltese entities between 2009 and 2014. A greater likelihood of financial distress, decreasing investment efficiency and increased gearing are identified as significant determinants of DHI use. Although DHI usage is low in comparison to larger states, it increased over the period under study.
HA is evidenced to be less popular in Malta, but the study finds a correlation between certain DHIs and HA usage. The quantitative statistical models provide no evidence of a significant earnings volatility (EV) or cash flow volatility (CFV) reduction effect from applying HA. The study does, however, find a significant CFV reduction effect from DHI usage, though no corresponding EV reduction effect.
Better education and dissemination of the HA treatment by auditors and regulatory bodies could help propagate it, potentially enhancing the EV reduction effectiveness of DHI use. This research provides empirical evidence substantiating the rationale for utilising DHIs in smaller island states, especially when coupled with a sound risk-management culture.
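The study's actual models are regression-based, but the flavour of a volatility-reduction test can be sketched with a toy two-sample comparison of a CFV proxy between DHI users and non-users. All numbers below are invented for illustration; only the direction of the effect mirrors the finding:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical firm-year panel: a cash-flow volatility proxy (std of
# scaled operating cash flows) for DHI users vs non-users.
cfv_users = np.abs(rng.normal(0.08, 0.02, 60))      # hedgers: lower CFV
cfv_nonusers = np.abs(rng.normal(0.12, 0.03, 60))   # non-hedgers

# Welch two-sample t statistic: a significantly negative value is
# consistent with a CFV reduction effect from DHI usage.
m1, m2 = cfv_users.mean(), cfv_nonusers.mean()
v1, v2 = cfv_users.var(ddof=1), cfv_nonusers.var(ddof=1)
t_stat = (m1 - m2) / np.sqrt(v1 / 60 + v2 / 60)
```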
Daniel Marjavaara and Staffan Lundström
Abstract
Purpose
This paper aims to develop an efficient and accurate numerical method that can be used in the design process of the waterways in a hydropower plant.
Design/methodology/approach
The paper reviews a range of recently published (2002-2006) works that aim to form the basis of a shape-optimization tool for flow design and to extend knowledge in the fields of computational fluid dynamics (CFD) and surrogate-based optimization techniques.
Findings
The paper provides information about how crude the optimization method can be with regard to, for example, the design variables, numerical noise and multiple objectives.
Research limitations/implications
The study does not give a detailed interpretation of the flow behaviour owing to the lack of validation data.
Practical implications
A very useful flow-design methodology that can be used in both academia and industry.
Originality/value
Shape optimization of hydraulic turbine draft tubes with aid of CFD and numerical optimization techniques has not been performed until recently due to the high CPU requirements on CFD simulations. The paper investigates the possibilities of using the global optimization algorithm response surface methodology in the design process of a full scale hydraulic turbine draft tube.
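The surrogate idea behind response surface methodology is simple to sketch: evaluate the expensive CFD model at a few sampled designs, fit a cheap polynomial response surface to those samples, then optimise the surface instead of the simulator. In the one-variable sketch below, `pressure_recovery` is a hypothetical stand-in for a full draft-tube CFD run, with an assumed optimum at 2.0:

```python
import numpy as np

def pressure_recovery(theta):
    """Stand-in for an expensive CFD evaluation of one draft-tube design
    variable (hypothetical objective with its maximum at theta = 2.0)."""
    return -(theta - 2.0) ** 2 + 0.8

# 1. Sample a few designs (each would be one full CFD run in practice).
thetas = np.array([0.0, 1.0, 2.5, 3.5, 4.0])
values = pressure_recovery(thetas)

# 2. Fit a quadratic response surface to the sampled (design, objective) pairs.
a, b, c = np.polyfit(thetas, values, 2)

# 3. Optimise the cheap surrogate instead of the simulator:
#    the vertex of the fitted parabola approximates the best design.
theta_opt = -b / (2 * a)
```

In a real application the surface would be refitted around `theta_opt` and re-validated with further CFD runs; the numerical noise mentioned in the findings shows up as scatter in `values` that the least-squares fit smooths over.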
Abstract
In this second installment, the author addresses some of the problems associated with empirically validating contingent-claim models for valuing risky debt. The article uses a simple contingent-claims risky debt valuation model to fit term structures of credit spreads derived from data on U.S. corporate bonds. An essential component of fitting this model is the use of the expected default frequency: the estimate of a firm's default probability over a specific time horizon. The author discusses the statistical and econometric procedures used in fitting the term structure of credit spreads and estimating model parameters, including iteratively reweighted non-linear least squares, which is used to dampen the impact of outliers and ensure convergence in each cross-sectional estimation from 1992 to 1999.
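The reweighting idea can be illustrated on a deliberately simplified spread curve. The article's model is a nonlinear contingent-claims model; for brevity this sketch applies Huber-style iterative reweighting to a linear term structure with one badly mispriced bond, so the data, weights and tolerances below are all illustrative:

```python
import numpy as np

def irls_fit(X, y, delta=5.0, n_iter=20):
    """Iteratively reweighted least squares with Huber-style weights.

    Each pass down-weights observations with large residuals, which
    dampens the impact of outliers on the fitted curve parameters.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

# Hypothetical spread curve: spread = 50 + 10 * maturity (basis points),
# with one badly mispriced bond acting as an outlier.
t = np.arange(1.0, 11.0)
y = 50.0 + 10.0 * t
y[4] += 300.0                                            # the outlier
X = np.column_stack([np.ones_like(t), t])

beta_irls = irls_fit(X, y)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]          # distorted by outlier
```

The reweighted fit stays close to the clean curve (intercept near 50, slope near 10), while plain least squares is pulled far off by the single outlier.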
Abstract
Purpose
Syntax-based text classification (TC) mechanisms have been largely replaced by semantic-based systems in recent years. Semantic-based TC systems are particularly useful in scenarios where similarity among documents is computed by considering semantic relationships among their terms. Kernel functions have received major attention because of the unprecedented popularity of SVMs in the field of TC. Most kernel functions exploit the syntactic structure of the text, but quite a few also use a priori semantic information for knowledge extraction. The purpose of this paper is to investigate semantic kernel functions in the context of TC.
Design/methodology/approach
This work presents a performance and accuracy analysis of seven semantic kernel functions (Semantic Smoothing Kernel, Latent Semantic Kernel, Semantic WordNet-based Kernel, Semantic Smoothing Kernel with Implicit Superconcept Expansions, Compactness-based Disambiguation Kernel Function, Omiotis-based S-VSM semantic kernel function and Top-k S-VSM semantic kernel) implemented with SVM as the kernel method. All seven semantic kernels are implemented in the SVM-Light tool.
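The common shape of several of these kernels, smoothing bag-of-words vectors through a term-similarity matrix before taking an inner product, can be sketched in a few lines. The three-word vocabulary and similarity values below are invented for illustration; real kernels derive the similarity matrix from WordNet, superconcepts or latent semantics:

```python
import numpy as np

# Toy term-document vectors over the vocabulary
# ["car", "automobile", "flower"] (hypothetical one-word documents).
D = np.array([
    [1.0, 0.0, 0.0],   # doc 1: "car"
    [0.0, 1.0, 0.0],   # doc 2: "automobile"
    [0.0, 0.0, 1.0],   # doc 3: "flower"
])

# Term-similarity (smoothing) matrix S: "car" and "automobile" are
# near-synonyms; "flower" is unrelated to both.
S = np.array([
    [1.0, 0.9, 0.0],
    [0.9, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

def semantic_kernel(d1, d2, S):
    """Semantic smoothing kernel: K(d1, d2) = (S d1) . (S d2)."""
    return (S @ d1) @ (S @ d2)

k_synonyms = semantic_kernel(D[0], D[1], S)   # "car" vs "automobile": > 0
k_unrelated = semantic_kernel(D[0], D[2], S)  # "car" vs "flower": 0
k_plain = D[0] @ D[1]                         # plain bag-of-words kernel: 0
```

The point of the smoothing is visible in the outputs: a purely syntactic dot product scores the synonym pair as zero, while the semantic kernel gives it a positive similarity.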
Findings
The performance and accuracy of the seven semantic kernel functions have been evaluated and compared. The experimental results show that the Top-k S-VSM semantic kernel has the highest performance and accuracy among all the evaluated kernel functions, which makes it a preferred building block for kernel methods for TC and retrieval.
Research limitations/implications
A combination of semantic and syntactic kernel functions needs to be investigated, as there is scope for further improvement in the accuracy and performance of all seven semantic kernel functions.
Practical implications
This research provides an insight into TC using a priori semantic knowledge. Three commonly used data sets are exploited. It will be quite interesting to explore these kernel functions on live web data, which may test their actual utility in real business scenarios.
Originality/value
Comparison of performance and accuracy parameters is the novel point of this research paper. To the best of the authors’ knowledge, this type of comparison has not been done previously.
Matthew Powers and Brian O'Flynn
Abstract
Purpose
Rapid sensitivity analysis and near-optimal decision-making in contested environments are valuable requirements when providing military logistics support. Port-of-debarkation denial motivates maneuver from strategic operational locations, further complicating logistics support. Simulations enable the rapid concept design, experimentation and testing that meet these complicated logistics-support demands. However, simulation model analyses are time consuming, as output data complexity grows with simulation input. This paper proposes a methodology that leverages the benefits of simulation-based insight and the computational speed of approximate dynamic programming (ADP).
Design/methodology/approach
This paper describes a simulated contested logistics environment and demonstrates how its output data inform the parameters required for the ADP dialect of reinforcement learning (aka Q-learning). The Q-learning output includes a near-optimal policy that prescribes a decision for each state modeled in the simulation. This paper's methods conform to DoD simulation modeling practices, complemented with AI-enabled decision-making.
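The tabular Q-learning loop at the heart of this approach can be sketched on a toy five-state supply route. The states, rewards and learning parameters below are invented for illustration; in the paper the state space and parameters come from simulation output data:

```python
import numpy as np

# Toy 5-state "supply route": actions move left (0) or right (1);
# reaching state 4 (the port) ends the episode with reward +10, and
# every other step costs -1.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1          # learning rate, discount, exploration
rng = np.random.default_rng(42)

for _ in range(500):                        # training episodes
    s = 0
    for _ in range(100):                    # step cap per episode
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 10.0 if s2 == 4 else -1.0
        # Q-learning update; terminal state 4 bootstraps to zero
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (s2 != 4) - Q[s, a])
        s = s2
        if s == 4:
            break

# The learned near-optimal policy prescribes a decision for each state.
policy = Q.argmax(axis=1)
```

After training, the greedy policy for every non-terminal state is "move right", i.e. head for the port; this per-state decision table is what the abstract calls the near-optimal policy.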
Findings
This study demonstrates the use of simulation output data for state-space reduction, mitigating the curse of dimensionality under which massive amounts of simulation output data become unwieldy. This work also demonstrates how Q-learning parameters reflect simulation inputs, so that simulation model behavior can be compared against near-optimal policies.
Originality/value
Fast computation is attractive for sensitivity analysis while divorcing evaluation from scenario-based limitations. The United States military is eager to embrace emerging AI analytic techniques to inform decision-making but is hesitant to abandon simulation modeling. This paper proposes Q-learning as an aid to overcome cognitive limitations in a way that satisfies the desire to wield AI-enabled decision-making combined with modeling and simulation.