Search results

1 – 10 of 838
Article

Shuangshuang Liu and Xiaoling Li

Abstract

Purpose

Conventional image super-resolution reconstruction with deep learning architectures suffers from difficult training and vanishing gradients. To solve these problems, the purpose of this paper is to propose a novel image super-resolution algorithm based on improved generative adversarial networks (GANs) with Wasserstein distance and gradient penalty.

Design/methodology/approach

The proposed algorithm first combines the conventional GANs architecture with the Wasserstein distance and the gradient penalty for the task of image super-resolution reconstruction (SRWGANs-GP). In addition, a novel perceptual loss function is designed for SRWGANs-GP to suit the task of image super-resolution reconstruction. The content loss is extracted from the deep model's feature maps, and these features are used to compute the mean square error (MSE) term of the generator's loss.
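The content-loss idea, MSE computed over deep feature maps rather than raw pixels, can be sketched as follows; the toy arrays and the name `feature_mse_loss` are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def feature_mse_loss(feat_sr, feat_hr):
    """Content loss: mean squared error between the feature maps of the
    super-resolved image and of the ground-truth high-resolution image."""
    feat_sr = np.asarray(feat_sr, dtype=float)
    feat_hr = np.asarray(feat_hr, dtype=float)
    return float(np.mean((feat_sr - feat_hr) ** 2))

# Toy feature maps (channels x height x width) standing in for the
# activations of a deep feature extractor.
f_sr = np.ones((2, 4, 4))
f_hr = np.zeros((2, 4, 4))
print(feature_mse_loss(f_sr, f_hr))  # 1.0
```

In the full SRWGANs-GP objective this term would be combined with the adversarial Wasserstein term and the gradient penalty.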

Findings

To validate the effectiveness and feasibility of the proposed algorithm, extensive comparative experiments were conducted on three common data sets, i.e. Set5, Set14 and BSD100. Experimental results show that the proposed SRWGANs-GP architecture has a stable error gradient and converges iteratively. Compared with the baseline deep models, the proposed GANs models achieve a significant improvement in performance and efficiency for image super-resolution reconstruction. The MSE computed from the deep model's feature maps is better suited to reconstructing contour and texture.

Originality/value

Compared with the state-of-the-art algorithms, the proposed algorithm obtains a better performance on image super-resolution and better reconstruction results on contour and texture.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 3
Type: Research Article
ISSN: 1756-378X

Article

Hiroshi Okuda, Shinobu Yoshimura, Genki Yagawa and Akihiro Matsuda

Abstract

Describes the parameter estimation procedures for non-linear finite element analysis using a hierarchical neural network. These procedures can be classified as neural-network-based inverse analysis, which has been investigated by the authors. The optimum values of the parameters involved in non-linear finite element analysis generally depend on the configuration of the analysis model, the initial condition, the boundary condition, etc., and have traditionally been determined in a heuristic manner. The procedures to estimate such multiple parameters consist of three steps: first, a set of training data is produced over a number of non-linear finite element computations; second, a neural network is trained using this data set; third, the trained network is used as a tool to search for appropriate values of the multiple parameters of the non-linear finite element analysis. The procedures were tested on the parameter estimation of the augmented Lagrangian method for steady-state incompressible viscous flow analysis and on the time-step evaluation of pseudo time-dependent stress analysis for incompressible inelastic structures.
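The three-step surrogate workflow can be sketched as follows, with a cheap analytic function standing in for the costly non-linear finite element solver and a polynomial fit standing in for the hierarchical neural network; both stand-ins, and all names, are illustrative assumptions:

```python
import numpy as np

# Step 1: stand-in for expensive non-linear FE runs; maps a parameter
# (e.g. a penalty coefficient) to a scalar convergence measure.
def fe_response(p):
    return (p - 2.0) ** 2 + 0.5

params = np.linspace(0.0, 4.0, 21)
responses = np.array([fe_response(p) for p in params])

# Step 2: "train" a surrogate on the data set (a polynomial fit here
# stands in for training the hierarchical neural network).
coeffs = np.polyfit(params, responses, deg=2)
surrogate = np.poly1d(coeffs)

# Step 3: search the surrogate, not the FE solver, for the parameter
# value that minimizes the predicted response.
grid = np.linspace(0.0, 4.0, 401)
best = grid[np.argmin(surrogate(grid))]
print(round(best, 2))  # 2.0
```

The point of the scheme is that step 3 queries only the cheap trained surrogate; no further finite element computations are needed during the parameter search.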

Details

Engineering Computations, vol. 15 no. 1
Type: Research Article
ISSN: 0264-4401

Article

Abdellatif Moudafi

Abstract

The focus of this paper is on the Q-Lasso, introduced in Alghamdi et al. (2013), which extends the Lasso of Tibshirani (1996). The closed convex subset Q, belonging to a Euclidean m-space for m ∈ ℕ, is the set of errors when linear measurements are taken to recover a signal/image via the Lasso. Based on a recent work by Wang (2013), we are interested in two new penalty methods for the Q-Lasso relying on two types of difference-of-convex-functions (DC for short) programming, where the DC objective functions are the difference of the ℓ1 and ℓσq norms and the difference of the ℓ1 and ℓr norms with r > 1. By means of a generalized q-term shrinkage operator exploiting the special structure of the ℓσq norm, we design a proximal gradient algorithm for handling the DC ℓ1 − ℓσq model. Then, based on a majorization scheme, we develop a majorized penalty algorithm for the DC ℓ1 − ℓr model. The convergence results of our new algorithms are presented as well. We would like to emphasize that extensive simulation results in the case Q = {b} show that these two new algorithms offer improved signal recovery performance and require reduced computational effort relative to state-of-the-art ℓ1 and ℓp (p ∈ (0, 1)) models; see Wang (2013). We also devise two DC algorithms in the spirit of a paper in which an exact DC representation of the cardinality constraint is investigated, which also used the largest-q norm ℓσq and presented numerical results showing the efficiency of our DC algorithm in comparison with other methods using other penalty terms in the context of quadratic programming; see Jun-ya et al. (2017).
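One plausible form of the generalized q-term shrinkage step, assuming it leaves the q largest-magnitude entries untouched and soft-thresholds the remainder (the paper's exact operator may differ), can be sketched as:

```python
import numpy as np

def q_term_shrink(x, q, lam):
    """Sketch of a q-term shrinkage step: the q largest-magnitude
    entries pass through unchanged, the rest are soft-thresholded."""
    x = np.asarray(x, dtype=float)
    out = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # soft threshold
    top = np.argsort(np.abs(x))[-q:]                     # q largest entries
    out[top] = x[top]                                    # keep them intact
    return out

print(q_term_shrink([3.0, -0.4, 1.5, 0.2], q=2, lam=0.5))
```

Inside a proximal gradient loop, such a step would follow each gradient step on the smooth part of the DC objective.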

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Article

Jéderson da Silva, Jucélio Tomás Pereira and Diego Amadeu F. Torres

Abstract

Purpose

The purpose of this paper is to propose a new scheme for obtaining acceptable solutions for problems of continuum topology optimization of structures, regarding the distribution and limitation of discretization errors by considering h-adaptivity.

Design/methodology/approach

The new scheme encompasses, simultaneously, the solution of the optimization problem considering a solid isotropic microstructure with penalization (SIMP) and the application of the h-adaptive finite element method. An analysis of discretization errors is carried out using an a posteriori error estimator based on both the recovery and the abrupt variation of material properties. The estimate of new element sizes is computed by a new h-adaptive technique named “Isotropic Error Density Recovery”, which is based on the construction of the strain energy error density function together with the analytical solution of an optimization problem at the element level.

Findings

Two-dimensional numerical examples, involving minimization of structural compliance under a constraint on material volume, demonstrate the capacity of the methodology to control and equidistribute discretization errors, as well as to obtain a sharp definition of the void–material interface, thanks to h-adaptivity, when compared with results obtained by other microstructure-based methods.

Originality/value

This paper presents a new technique to design a mesh of isotropic triangular finite elements. Furthermore, this technique is applied to continuum topology optimization problems using a new iterative scheme to obtain solutions with controlled discretization errors, measured in the energy norm, and a high resolution of the material boundary. Regarding the computational cost in terms of degrees of freedom, the present scheme provides approximations with considerably less error than the optimization process on fixed meshes.

Article

Azra Nazir, Roohie Naaz Mir and Shaima Qureshi

Abstract

Purpose

The trend of “Deep Learning for Internet of Things (IoT)” has gained fresh momentum, with enormous upcoming applications employing these models as their processing engine and the Cloud as their resource giant. But this picture leads to underutilization of the ever-increasing IoT device pool, which had already passed the 15 billion mark in 2015. Thus, it is high time to explore a different approach to tackle this issue, keeping in view the characteristics and needs of the two fields. Processing at the Edge can boost applications with real-time deadlines while complementing security.

Design/methodology/approach

This review paper contributes towards three cardinal directions of research in the field of DL for IoT. The first section covers the categories of IoT devices and how Fog can aid in overcoming the underutilization of millions of devices, forming the realm of things for IoT. The second direction handles the issue of the immense computational requirements of DL models by covering specific compression techniques. An appropriate combination of these techniques, including regularization, quantization and pruning, can aid in building an effective compression pipeline for establishing DL models for IoT use-cases. The third direction combines both these views and introduces a novel approach of parallelization for setting up a distributed-systems view of DL for IoT.
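Two stages of such a compression pipeline can be sketched minimally, assuming magnitude-based pruning and uniform weight quantization as representative choices; the names, thresholds and toy weights are illustrative, not from any specific framework:

```python
import numpy as np

def prune(w, sparsity):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    w = np.asarray(w, dtype=float)
    k = int(sparsity * w.size)
    thresh = np.sort(np.abs(w).ravel())[k - 1] if k > 0 else -1.0
    return np.where(np.abs(w) <= thresh, 0.0, w)

def quantize(w, bits=8):
    """Uniform quantization of the surviving weights to 2**bits levels."""
    w = np.asarray(w, dtype=float)
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

w = np.array([0.05, -0.9, 0.4, -0.02, 0.7])
w_small = quantize(prune(w, sparsity=0.4), bits=8)
print(np.count_nonzero(w_small))  # 3 weights survive pruning
```

In a real pipeline these stages would follow regularized training and typically precede a fine-tuning pass to recover accuracy.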

Findings

DL models are growing deeper with every passing year. Well-coordinated distributed execution of such models using Fog displays a promising future for the IoT application realm. It is realized that a vertically partitioned compressed deep model can handle the trade-off between size, accuracy, communication overhead, bandwidth utilization and latency, but at the expense of a considerably larger memory footprint. To reduce the memory budget, we propose to exploit HashedNets as potentially favorable candidates for distributed frameworks. However, the critical point between accuracy and size for such models needs further investigation.

Originality/value

To the best of our knowledge, no study has explored the inherent parallelism in deep neural network architectures for their efficient distribution over the Edge-Fog continuum. Besides covering techniques and frameworks that have tried to bring inference to the Edge, the review uncovers significant issues and possible future directions for endorsing deep models as processing engines for real-time IoT. The study is directed to both researchers and industrialists to take on various applications to the Edge for better user experience.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X

Article

Shamsuddin Ahmed

Abstract

Purpose

The proposed algorithm successfully optimizes complex error functions, which are difficult to differentiate, ill conditioned or discontinuous. It is a benchmark to identify initial solutions in artificial neural network (ANN) training.

Design/methodology/approach

A multi-directional ANN training algorithm that needs no derivative information is introduced as a constrained one-dimensional problem. A directional search vector examines the ANN error function in weight-parameter space. The search vector moves in all possible directions to find the minimum function value. The network weights are increased or decreased, depending on the shape of the error-function hypersurface, so that the search vector finds descent directions, and the minimum function value is thus determined. To accelerate the convergence of the algorithm, a momentum search is designed that avoids overshooting the local minimum.
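A minimal derivative-free sketch in this spirit probes each coordinate direction of the weight space and shrinks the step when no direction improves; the momentum search and self-adaptive learning rates of the paper are omitted, and all names here are illustrative:

```python
import numpy as np

def multidirectional_search(f, w, step=0.5, shrink=0.5, iters=200):
    """Derivative-free search: probe each coordinate direction, accept
    moves that lower the error, halve the step when none improve."""
    w = np.asarray(w, dtype=float)
    best = f(w)
    for _ in range(iters):
        improved = False
        for i in range(w.size):
            for d in (step, -step):       # examine both directions
                trial = w.copy()
                trial[i] += d
                val = f(trial)
                if val < best:
                    w, best, improved = trial, val, True
        if not improved:
            step *= shrink                # refine around the minimum
    return w, best

# A non-differentiable error surface: gradient-based training would
# struggle at the kink, but this search only evaluates the function.
err = lambda w: abs(w[0] - 1.0) + abs(w[1] + 2.0)
w_opt, e = multidirectional_search(err, np.zeros(2))
print(w_opt, e)  # converges to [1.0, -2.0] with zero error
```

Because only function values are used, the same loop applies unchanged to discontinuous or ill-conditioned error functions.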

Findings

The training algorithm is insensitive to the initial starting weights, in comparison with gradient-based methods, and can therefore locate a relative local minimum from anywhere on the error surface, which is an important property of this training method. The algorithm is suitable for error functions that are discontinuous or ill conditioned, or whose derivative is not readily available. It improves on the standard back-propagation method in convergence and avoids premature termination near pseudo local minima.

Research limitations/implications

Classification problems are handled efficiently by this method, but for complex time series, convergence in some instances is slowed by the complexity of the error surface. Different ANN structures could be investigated further to assess the performance of the algorithm.

Practical implications

The search scheme moves along the valleys and ridges of the error function to trace the neighborhood of the minimum. The algorithm only evaluates the error function. As soon as the algorithm detects a flat region of the error function, care is taken to avoid slow convergence.

Originality/value

The algorithm is efficient due to the incorporation of three important methodologies: the first is the momentum search; the second is the implementation of the directional search vector in coordinate directions; the third is the one-dimensional search in a constrained region to identify self-adaptive learning rates and improve convergence.

Details

Kybernetes, vol. 39 no. 7
Type: Research Article
ISSN: 0368-492X

Article

K.H. Greenly

Abstract

Ice formation constitutes a hazard to aircraft operation both on the ground and in flight. This article deals with the protection of aircraft against ice formation in flight, but does not consider the countermeasures which must be taken on the ground in winter conditions. The first part of the paper deals with the atmospheric conditions which give rise to ice accretion on forward-facing surfaces and the types of ice which form at various ambient temperatures. A general survey is then made of methods of solving the problem and the weight and power penalties which they entail. Finally, some recent developments in electrical de-icing systems are reviewed.

Details

Aircraft Engineering and Aerospace Technology, vol. 35 no. 4
Type: Research Article
ISSN: 0002-2667

Article

K.C. LAM, TIESONG HU, S.O. CHEUNG, R.K.K. YUEN and Z.M. DENG

Abstract

Modelling of the multiproject cash-flow decisions in a contracting firm facilitates optimal resource utilization, financial planning and profit forecasting, and enables the inclusion of cash-flow liquidity in forecasting. However, it is a great challenge for a contracting firm to manage its multiproject cash flow when multiple large construction projects are involved, as these manipulate large amounts of resources (e.g. labour, plant, material and cost). In such cases the complexity of the problem, and hence of the constraints involved, renders most existing optimization techniques computationally intractable within reasonable time frames. This limitation inhibits the ability of contracting firms to complete construction projects at maximum efficiency through efficient utilization of resources among projects. Recently, artificial neural networks have demonstrated their strength in solving many optimization problems efficiently. In this regard, a novel recurrent-neural-network model that integrates multi-objective linear programming and neural network (MOLPNN) techniques has been developed. The model was applied to a relatively large contracting company running 10 projects concurrently in Hong Kong. The case study verified the feasibility and applicability of the MOLPNN for the defined problem. A comparison of two optimal cash-flow schedules (a risk-avoiding scheme A and a risk-seeking scheme B), based on the decision maker's preference, is described in this paper.

Details

Engineering, Construction and Architectural Management, vol. 8 no. 2
Type: Research Article
ISSN: 0969-9988

Article

Emad Khorshid, Abdulaziz Alfadli and Abdulazim Falah

Abstract

Purpose

The purpose of this paper is to present numerical experimentation of three constraint detection methods to explore their main features and drawbacks in infeasibility detection during the design process.

Design/methodology/approach

Three detection methods (the deletion filter, the additive method and the elasticity method) are used to find the minimal infeasible subsystem of constraints in conflict. These methods are tested with four enhanced NLP solvers (a sequential quadratic program, multi-start sequential quadratic programming, a global optimization solver and a genetic algorithm method).
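The deletion filter, the first of the three methods, can be sketched on a toy feasibility problem; the interval constraints and all names here are illustrative assumptions, not the paper's test problems:

```python
def deletion_filter(constraints, feasible):
    """Deletion filter: drop each constraint in turn. If the remaining
    set is still infeasible the constraint is not needed and stays out;
    otherwise it is essential to the conflict and is kept."""
    kept = list(constraints)
    for c in list(constraints):
        trial = [k for k in kept if k is not c]
        if not feasible(trial):
            kept = trial
    return kept

# Toy one-dimensional constraints: each (lo, hi) pair bounds x.
def interval_feasible(cs):
    lo = max((c[0] for c in cs), default=float("-inf"))
    hi = min((c[1] for c in cs), default=float("inf"))
    return lo <= hi

cs = [(0, 10), (5, 8), (9, 20)]                # (5, 8) conflicts with (9, 20)
print(deletion_filter(cs, interval_feasible))  # [(5, 8), (9, 20)]
```

In the design setting, the feasibility oracle would be one of the NLP solvers rather than an interval intersection, but the filtering loop is the same.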

Findings

The additive filtering method with both the multistart sequential quadratic programming and the genetic algorithm solvers is the most efficient method in terms of computation time and accuracy of detecting infeasibility. Meanwhile, the elasticity method has the worst performance.

Research limitations/implications

The research has been carried out for inequality constraints and continuous design variables only. This work could be extended to develop a computer-aided graphical user interface with the capability of including equality constraints and discrete variables.

Practical implications

The proposed methods have great potential for guiding the designer in detecting infeasibility in ill-posed, complex design problems.

Originality/value

The application of the proposed infeasibility detection methods, with their four enhanced solvers, to several mechanical design problems reduces the number of constraints to be checked from the full set to a much smaller subset.

Details

Journal of Engineering, Design and Technology, vol. 16 no. 2
Type: Research Article
ISSN: 1726-0531

Article

Alexander D. Klose and Andreas H. Hielscher

Abstract

Purpose

This paper sets out to give an overview about state‐of‐the‐art optical tomographic image reconstruction algorithms that are based on the equation of radiative transfer (ERT).

Design/methodology/approach

An objective function, which describes the discrepancy between measured and numerically predicted light-intensity data on the tissue surface, is iteratively minimized to find the unknown spatial distribution of the optical parameters or sources. At each iteration step, the predicted partial current is calculated by a forward model of light propagation based on the ERT. The equation of radiative transfer is solved with either finite-difference or finite-volume methods.
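The iterative loop described above can be sketched with a one-parameter toy forward model standing in for the ERT solver; the exponential model, the step size and all names are illustrative assumptions:

```python
import numpy as np

# Toy forward model standing in for the ERT solver: maps an absorption
# parameter mu to predicted boundary intensities along three path lengths.
def forward(mu):
    return np.exp(-mu * np.array([1.0, 2.0, 3.0]))

measured = forward(0.4)                    # synthetic "measurements"

def objective(mu):
    r = forward(mu) - measured             # predicted minus measured
    return float(r @ r)                    # squared discrepancy

# Iteratively minimize the discrepancy with a finite-difference
# gradient step, mimicking the reconstruction update loop.
mu, lr, h = 1.0, 0.2, 1e-6
for _ in range(500):
    g = (objective(mu + h) - objective(mu - h)) / (2 * h)
    mu -= lr * g
print(round(mu, 3))  # recovers the true absorption, 0.4
```

In the real algorithm the scalar mu becomes a spatial map of optical parameters and each forward evaluation is a full ERT solve, which is why efficient forward solvers dominate the cost.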

Findings

Tomographic reconstruction algorithms based on the ERT accurately recover the spatial distribution of optical tissue properties and light sources in biological tissue. These tissues can either have small geometries and large absorption coefficients or contain void-like inclusions.

Originality/value

These image reconstruction methods can be employed in small animal imaging for monitoring blood oxygenation, in imaging of tumor growth, in molecular imaging of fluorescent and bioluminescent probes, in imaging of human finger joints for early diagnosis of rheumatoid arthritis, and in functional brain imaging.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 18 no. 3/4
Type: Research Article
ISSN: 0961-5539
