Search results

1 – 10 of over 7000
Article
Publication date: 8 June 2021

Tianshu Li, Shukai Duan, Jun Liu and Lidan Wang

Abstract

Purpose

Stochastic computing, an alternative to binary computation, has key merits such as fault tolerance and low hardware cost. However, its bit-wise calculation mode requires very fast hardware response times, which complementary metal oxide semiconductor (CMOS) components struggle to deliver. A stochastic computing implementation scheme based on the memristive system is therefore proposed to reduce the response time. The purpose of this paper is to provide a memristive-system-based implementation scheme for stochastic computing.

Design/methodology/approach

The hardware structure of material logic based on the memristive system is realized by exploiting the advantages of the memristor. Schemes for NOT logic, AND logic and the multiplexer, the basic units of stochastic computing, are then designed. Furthermore, a stochastic computing system based on memristive combinational logic is constructed and its validity is successfully verified on a test case.
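
To make the bit-stream arithmetic concrete, here is a minimal software sketch of the three basic units in unipolar coding, assuming ideal random bitstreams; the memristive hardware realization itself is not modelled, and all values are illustrative.

```python
# Gate-level behaviour of the stochastic-computing building blocks described
# above (unipolar coding); the memristive realization is not simulated.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # bitstream length; accuracy improves as N grows

def to_stream(p, n=N):
    """Encode a probability p in [0, 1] as a random bitstream."""
    return rng.random(n) < p

def from_stream(bits):
    """Decode a bitstream back to a probability estimate."""
    return bits.mean()

a, b = to_stream(0.8), to_stream(0.5)

# AND gate: multiplies the encoded values in unipolar coding.
product = from_stream(a & b)               # ~= 0.8 * 0.5 = 0.4

# NOT gate: computes 1 - p.
complement = from_stream(~to_stream(0.3))  # ~= 0.7

# 2-to-1 multiplexer with a 0.5-probability select line:
# scaled addition (p_a + p_b) / 2.
sel = to_stream(0.5)
scaled_sum = from_stream(np.where(sel, a, b))  # ~= (0.8 + 0.5) / 2 = 0.65

print(product, complement, scaled_sum)
```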

Findings

The proposed stochastic computing system requires fewer elements than conventional stochastic computing systems based on CMOS circuits.

Originality/value

The paper proposes a novel implementation scheme for stochastic computing based on memristive systems, which differs from conventional stochastic computing based on CMOS circuits.

Details

Circuit World, vol. 48 no. 3
Type: Research Article
ISSN: 0305-6120

Article
Publication date: 3 October 2018

Mourad Guettiche and Hamamache Kheddouci

Abstract

Purpose

The purpose of this paper is to study a multiple-origin-multiple-destination variant of the dynamic critical nodes detection problem (DCNDP) and the dynamic critical links detection problem (DCLDP) in stochastic networks. DCNDP and DCLDP consist of identifying the subsets of nodes and links, respectively, whose deletion maximizes the stochastic shortest paths between all origin–destination pairs in the graph modeling the transport network. The identification of such nodes (or links) helps to better control road traffic and to predict the measures necessary to avoid congestion.

Design/methodology/approach

A Markovian decision process is used to model the shortest path problem under dynamic traffic conditions. Effective algorithms to determine the critical nodes (links) while considering the dynamicity of the traffic network are provided. Also, sensitivity analysis toward capacity reduction for critical links is studied. Moreover, the complexity of the underlying algorithms is analyzed and the computational efficiency resulting from the decomposition operation of the network into communities is highlighted.
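
As a rough illustration of the detection principle, the sketch below ranks nodes by how much their deletion increases the total shortest-path cost over all origin-destination pairs; it deliberately replaces the paper's Markovian, time-dependent traffic model with a static weighted graph, and the graph itself is invented for the example.

```python
# Deliberately simplified, deterministic critical-node detection: the most
# critical node is the one whose removal most increases the total
# shortest-path cost over all origin-destination pairs.
import heapq

def dijkstra(adj, src, removed=frozenset()):
    """Shortest-path costs from src, ignoring nodes in 'removed'."""
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, {}).items():
            if v not in removed and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def total_od_cost(adj, origins, dests, removed=frozenset()):
    """Sum of shortest-path costs over all origin-destination pairs."""
    return sum(dijkstra(adj, o, removed).get(t, 1e6)  # big penalty if cut off
               for o in origins for t in dests)

adj = {"a": {"b": 1, "c": 4}, "b": {"c": 1, "d": 5}, "c": {"d": 1}, "d": {}}
origins, dests = ["a"], ["d"]
base = total_od_cost(adj, origins, dests)
critical = max((v for v in adj if v not in origins + dests),
               key=lambda v: total_od_cost(adj, origins, dests, {v}) - base)
print("most critical node:", critical)      # 'c' on this toy network
```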

Findings

The numerical results demonstrate that the use of the dynamic shortest path (time dependency) as a metric has a significant impact on the identification of critical nodes/links, and the experiments conducted on real-world networks highlight the importance of sensitive links for dynamically detecting critical links and elaborating smart transport plans.

Research limitations/implications

The research in this paper also revealed several challenges, which call for future investigations. First, the authors have restricted their experimentation to a small network, where the only focus is on the model behavior in the absence of historical data; they intend to extend this study to very large networks using real data. Second, the authors have considered only congestion to assess a network's criticality; future research on this topic may include other factors, mainly vulnerability.

Practical implications

Taking the dynamic and stochastic nature of the problem into account in the modeling yields effective tools for real-time control of transportation networks. This leads to the design of optimized smart transport plans, particularly in disaster management, to improve emergency evacuation efficiency.

Originality/value

The paper provides a novel approach to solve critical nodes/links detection problems. In contrast to the majority of research works in the literature, the proposed model considers dynamicity and betweenness while taking into account the stochastic aspect of transport networks. This enables the approach to guide the traffic and analyze transport networks mainly under disaster conditions in which networks become highly dynamic.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 12 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 13 July 2010

S.P. Joy Vasantha Rani and K. Aruna Prabha

Abstract

Purpose

The purpose of this paper is to implement the hardware structure for radial basis function (RBF) neural network based on stochastic logic computation.

Design/methodology/approach

The hardware implementation of artificial neural networks (ANNs) has a complicated structure and is normally space consuming due to the huge size of the digital multiplication, addition/subtraction and non-linear activation function blocks. The unavailability of ANN hardware at an attractive price also limits its use in real-time applications. In stochastic logic theory, real numbers are converted to random streams of bits instead of binary numbers. The performance of the proposed structure is analyzed using the very high speed integrated circuit hardware description language (VHDL).

Findings

Stochastic theory‐based arithmetic and logic approach provides a way to carry out complex computation with very simple hardware and very flexible design of the system. The Gaussian RBF for hidden layer neuron is employed using stochastic counter that reduces the hardware resources significantly. The number of hidden layer neurons in RBF neural network structure is adaptively varied to make it an intelligent system.
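
For orientation, a plain floating-point sketch of the Gaussian RBF network that the hardware approximates is given below; the stochastic-counter realization and the adaptive hidden layer size are not reproduced, and the centers, width and weights are illustrative values.

```python
# Reference (software) model of the Gaussian RBF network whose hidden-layer
# activations the stochastic-counter hardware approximates.
import numpy as np

def rbf_hidden(x, centers, sigma):
    """Gaussian radial basis activations exp(-||x - c||^2 / (2 sigma^2))."""
    d2 = ((centers - x) ** 2).sum(axis=1)   # squared distances to each center
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_forward(x, centers, sigma, weights):
    """Hidden RBF layer followed by a linear output layer."""
    return rbf_hidden(x, centers, sigma) @ weights

centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # 3 hidden neurons
weights = np.array([0.5, -0.2, 1.0])
print(rbf_forward(np.array([0.2, 0.9]), centers, 0.5, weights))
```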

Originality/value

The paper outlines the stochastic neural computation on digital hardware for implementing radial basis neural network. The structure has considered the optimized usage of hardware resources.

Details

Journal of Engineering, Design and Technology, vol. 8 no. 2
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 12 June 2017

Khaoula Chikhaoui, Noureddine Bouhaddi, Najib Kacem, Mohamed Guedri and Mohamed Soula

Abstract

Purpose

The purpose of this paper is to develop robust metamodels, which allow propagating parametric uncertainties, in the presence of localized nonlinearities, with reduced cost and without significant loss of accuracy.

Design/methodology/approach

The proposed metamodels combine the generalized polynomial chaos expansion (gPCE) for uncertainty propagation with reduced order models (ROMs). Because it is based on computing deterministic responses, the gPCE requires prohibitive computational time for large finite element models with many uncertain parameters and nonlinearities. To overcome this issue, a first metamodel is created by combining the gPCE with a ROM based on enriching the truncated Ritz basis with static residuals that account for the stochastic and nonlinear effects. The extension to the Craig–Bampton approach leads to a second metamodel.
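
A minimal sketch of the gPCE ingredient alone, for a single Gaussian input fitted by least-squares regression, is shown below; the Ritz/Craig–Bampton reduced-order models that the paper combines it with are omitted, and the toy solver is an assumption made for illustration.

```python
# Generalized polynomial chaos expansion (gPCE) for one standard Gaussian
# input, fitted by least-squares regression on deterministic solver runs.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)

def solver(xi):
    """Stand-in for the expensive deterministic response (illustrative)."""
    return np.sin(xi) + 0.1 * xi ** 3

order = 6
xi = rng.standard_normal(200)           # training samples of the input
Psi = hermevander(xi, order)            # probabilists' Hermite basis He_0..He_6
coeffs, *_ = np.linalg.lstsq(Psi, solver(xi), rcond=None)

# The fitted polynomial is now a cheap surrogate for uncertainty propagation.
xi_new = rng.standard_normal(100_000)
y_hat = hermevander(xi_new, order) @ coeffs
print(y_hat.mean(), y_hat.std())                    # surrogate statistics
print(solver(xi_new).mean(), solver(xi_new).std())  # reference statistics
```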

Findings

Using the metamodels to approximate the time responses of a frame and of a coupled micro-beam structure containing localized nonlinearities and stochastic parameters significantly reduces the computation cost with acceptable loss of accuracy, with respect to the reference Latin hypercube sampling method.

Originality/value

The proposed combination of the gPCE and the ROMs leads to a computationally efficient and accurate tool for robust design in the presence of parametric uncertainties and localized nonlinearities.

Details

Engineering Computations, vol. 34 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 15 April 2020

Chandra Shekhar, Amit Gupta, Madhu Jain and Neeraj Kumar

Abstract

Purpose

The purpose of this paper is to present a sensitivity analysis of fault-tolerant redundant repairable computing systems with imperfect coverage, reboot and recovery process.

Design/methodology/approach

In this investigation, the authors consider a computing system with a finite number of identical working units functioning simultaneously, with the provision of standby units. Working and standby units are prone to random failure and are administered by software that is itself subject to unpredictable failure. The redundant repairable computing system is modeled as a Markovian machine interference problem with exponentially distributed failure and service rates. To remove a failed unit from the computing system, the system either opts for a randomized reboot process or incurs a recovery delay.

Findings

Transient-state probabilities are determined, from which the authors develop various reliability measures, namely reliability/availability, mean time to failure, failure frequency and so on, and queueing characteristics, namely the expected number of failed units, the throughput of the system and so on, for predictive purposes. To demonstrate the practicability of the developed model, numerical simulation and sensitivity analysis for different parameters have also been carried out, and the results are summarized in tables and graphs. The transient results help analyze the behavior of the system before it reaches stability, and the derived measures give direct insights for parametric decision-making.
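
A small transient sketch of a bare Markovian machine-repair model of this kind, solving the forward Kolmogorov equations numerically, is given below; imperfect coverage, reboot, recovery delay, standbys and software failures are omitted, and all rates are illustrative.

```python
# Transient analysis of a Markovian machine-repair model: M identical units
# and a single repair facility with exponential failure/repair rates.
import numpy as np
from scipy.integrate import solve_ivp

M, lam, mu = 3, 0.2, 1.0        # number of units, failure rate, repair rate
n = M + 1                       # state k = number of failed units, k = 0..M

Q = np.zeros((n, n))            # infinitesimal generator matrix
for k in range(M):
    Q[k, k + 1] = (M - k) * lam     # one of the working units fails
for k in range(1, n):
    Q[k, k - 1] = mu                # the repair facility restores one unit
np.fill_diagonal(Q, -Q.sum(axis=1))

def forward_kolmogorov(t, p):
    return p @ Q                    # dp/dt = p Q for the row vector p(t)

p0 = np.eye(n)[0]                   # initially all units are working
sol = solve_ivp(forward_kolmogorov, (0.0, 10.0), p0)
p_t = sol.y[:, -1]                  # state probabilities at t = 10
print("P(all units failed):", p_t[-1])
print("Expected number of failed units:", (np.arange(n) * p_t).sum())
```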

Social implications

Conclusions are drawn and the future scope is outlined. The present research study would help system analysts and system designers make better choices/decisions in order to obtain an economical design and strategy based on the desired mean time to failure, the reliability/availability of the system and other queueing characteristics.

Originality/value

Different from previous investigations, the studied model provides a more accurate assessment of the computing system in uncertain environments, based on sensitivity analysis.

Details

International Journal of Quality & Reliability Management, vol. 37 no. 6/7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 14 November 2012

Xia Yan, Kai Zhang, Mudassir Nawaz and Sanchit Rai

Abstract

In reservoir history matching, the least-squares objective function is usually used to minimize the mismatch between the predicted production data and the observations. However, as history matching is an ill-posed inverse problem with non-unique solutions, the calibrated reservoir model may be far from the real geological model if only the production data are matched. To solve this problem, a regularization method for reservoir history matching is implemented, in which not only the production data are matched but prior geological information is also used to correct and update the current reservoir model, so that the updated model is consistent with the geological model. In this paper, the simultaneous perturbation stochastic approximation (SPSA) method coupled with fast streamline simulation provides an effective method (SLSPSA) to optimize the objective function. As a stochastic approximation algorithm, SLSPSA guarantees convergence. Compared to gradient-based algorithms, it avoids the massive calculation and storage of the adjoint or sensitivity matrix. Parallel computing is implemented in the algorithm, which reduces the simulation time and improves the computational efficiency. The method was verified on an example test case.
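
For reference, the core SPSA update looks roughly as follows; the streamline simulator is replaced by a toy quadratic misfit, and the gain schedules are the standard Spall choices rather than the paper's settings.

```python
# Simultaneous perturbation stochastic approximation (SPSA): the gradient of
# the misfit is estimated from just two objective evaluations per iteration,
# regardless of the number of model parameters.
import numpy as np

rng = np.random.default_rng(2)

def objective(m):
    """Stand-in for the history-matching misfit (illustrative only)."""
    return ((m - 3.0) ** 2).sum()

m = np.zeros(50)                       # initial reservoir parameter vector
for k in range(1, 2001):
    a_k = 0.1 / k ** 0.602             # step-size gain (standard schedule)
    c_k = 0.1 / k ** 0.101             # perturbation gain (standard schedule)
    delta = rng.choice([-1.0, 1.0], size=m.size)   # Bernoulli +/-1 directions
    g_hat = (objective(m + c_k * delta) - objective(m - c_k * delta)) \
            / (2.0 * c_k * delta)      # simultaneous-perturbation gradient
    m -= a_k * g_hat
print(objective(m))                    # approaches 0 as m -> 3
```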

Article
Publication date: 4 September 2018

Muhannad Aldosary, Jinsheng Wang and Chenfeng Li

Abstract

Purpose

This paper aims to provide a comprehensive review of uncertainty quantification methods supported by evidence-based comparison studies. Uncertainties are widely encountered in engineering practice, arising from such diverse sources as heterogeneity of materials, variability in measurement, lack of data and ambiguity in knowledge. Academia and industry have long been researching uncertainty quantification (UQ) methods to quantitatively account for the effects of various input uncertainties on the system response. Despite the rich literature of relevant research, UQ is not an easy subject for novice researchers/practitioners, as many different methods and techniques coexist with inconsistent input/output requirements and analysis schemes.

Design/methodology/approach

This confusing status significantly hampers the research progress and practical application of UQ methods in engineering. In the context of engineering analysis, research efforts on UQ are concentrated in two largely separate fields: structural reliability analysis (SRA) and the stochastic finite element method (SFEM). This paper provides a state-of-the-art review of SRA and SFEM, covering both technology and application aspects. Moreover, unlike standard survey papers that focus primarily on description and explanation, a thorough and rigorous comparative study is performed to test all UQ methods reviewed in the paper on a common set of representative examples.

Findings

Over 20 uncertainty quantification methods in the fields of structural reliability analysis and stochastic finite element methods are reviewed and rigorously tested on carefully designed numerical examples. They include FORM/SORM, importance sampling, subset simulation, response surface method, surrogate methods, polynomial chaos expansion, perturbation method, stochastic collocation method, etc. The review and comparison tests comment and conclude not only on accuracy and efficiency of each method but also their applicability in different types of uncertainty propagation problems.
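
As a baseline for the methods compared, here is a crude Monte Carlo estimate of a failure probability for a toy limit-state function; the specific function and sample size are assumptions for illustration only.

```python
# Crude Monte Carlo estimate of a failure probability P(g(X) <= 0), the
# baseline that FORM/SORM, importance sampling and subset simulation refine.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def g(x1, x2):
    """Toy limit state with two standard Gaussian inputs; failure if g <= 0."""
    return 5.0 - x1 - x2

n = 1_000_000
x = rng.standard_normal((n, 2))
pf_mc = np.mean(g(x[:, 0], x[:, 1]) <= 0.0)

# Analytical reference: x1 + x2 ~ N(0, 2), so pf = Phi(-5 / sqrt(2)).
print(pf_mc, norm.cdf(-5.0 / np.sqrt(2.0)))
```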

Originality/value

The research fields of structural reliability analysis and stochastic finite element methods have largely been developed separately, although both tackle uncertainty quantification in engineering problems. For the first time, all major uncertainty quantification methods in both fields are reviewed and rigorously tested on a common set of examples. Critical opinions and concluding remarks are drawn from the rigorous comparative study, providing objective evidence-based information for further research and practical applications.

Details

Engineering Computations, vol. 35 no. 6
Type: Research Article
ISSN: 0264-4401

Book part
Publication date: 21 September 2022

Dmitrij Celov and Mariarosaria Comunale

Abstract

Recently, star variables and the post-crisis nature of cyclical fluctuations have attracted a great deal of interest. In this chapter, the authors investigate different methods of assessing business cycles (BCs) for the European Union in general and the euro area in particular. First, the authors conduct a Monte Carlo (MC) experiment using a broad spectrum of univariate trend-cycle decomposition methods. The simulation aims to examine the ability of the analysed methods to find the observed simulated cycle with structural properties similar to actual macroeconomic data. For the simulation, the authors used the structural model’s parameters calibrated to the euro area’s real gross domestic product (GDP) and unemployment rate. The simulation outcomes indicate the sufficient composition of the suite of models (SoM) consisting of the popular Hodrick–Prescott, Christiano–Fitzgerald and structural trend-cycle-seasonal filters, then used for the real application. The authors find that: (i) there is a high level of model uncertainty in comparing the estimates; (ii) growth rate (acceleration) cycles often have the worst performance, but they could be useful as early-warning predictors of turning points in growth and BCs; and (iii) the best-performing MC approaches provide a reasonable combination as the SoM. When swings are shorter and/or smaller, it is easier to pick a good alternative method to the suite to capture the BC for real GDP. Second, the authors estimate the BCs for real GDP and unemployment data varying from 1995Q1 to 2020Q4 (GDP) or 2020Q3 (unemployment), ending up with 28 cycles per country. This analysis also confirms that the BCs of euro area members are quite synchronized with the aggregate euro area. Some major differences can be found, however, especially in the case of periphery and new member states, with the latter improving in terms of coherency after the global financial crisis. The German cycles are among the cyclical movements least synchronized with the aggregate euro area.
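
For readers unfamiliar with the Hodrick–Prescott filter used in the suite, a minimal implementation on synthetic quarterly data is sketched below; the series and parameter values are illustrative, not the chapter's data.

```python
# Hodrick-Prescott trend-cycle decomposition: the trend tau solves
# (I + lambda * D'D) tau = y, where D is the second-difference operator,
# and the cycle is y - tau.
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Return (trend, cycle); lamb = 1600 is standard for quarterly data."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)     # (n-2) x n second differences
    trend = np.linalg.solve(np.eye(n) + lamb * D.T @ D, y)
    return trend, y - trend

t = np.arange(120)                          # 30 years of quarterly data
y = (0.02 * t + 0.5 * np.sin(2 * np.pi * t / 32)
     + 0.1 * np.random.default_rng(4).standard_normal(120))
trend, cycle = hp_filter(y)
print(cycle[:4])
```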

Article
Publication date: 5 September 2018

Ramzi Lajili, Olivier Bareille, Mohamed Lamjed Bouazizi, Mohamed Ichchou and Noureddine Bouhaddi

Abstract

Purpose

This paper aims to propose numerical-based and experiment-based identification processes that account for uncertainties when identifying structural parameters in a wave propagation framework.

Design/methodology/approach

A variant of the inhomogeneous wave correlation (IWC) method is proposed. It consists of identifying the propagation parameters, such as the wavenumber and the wave attenuation, from frequency response functions, which can be computed either numerically or experimentally; the identification process is accordingly called numerical-based or experiment-based. The proposed variant of the IWC method is then combined with the Latin hypercube sampling method for uncertainty propagation. Stochastic identification processes are consequently proposed, allowing more realistic identification.
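
A one-dimensional sketch of the IWC identification step is given below, assuming a synthetic noise-free field; the beam models, experimental FRFs and the Latin hypercube uncertainty loop of the paper are omitted.

```python
# Inhomogeneous wave correlation (IWC) idea in 1D: the wavenumber k and loss
# factor gamma are identified as the pair maximizing the correlation between
# a measured field and the trial wave exp(-i * k * (1 + i*gamma) * x).
import numpy as np

x = np.linspace(0.0, 1.0, 200)                          # measurement positions
k_true, gamma_true = 40.0, 0.02
w = np.exp(-1j * k_true * (1 + 1j * gamma_true) * x)    # synthetic field

def iwc(k, gamma):
    """Normalized correlation between the field and an inhomogeneous wave."""
    trial = np.exp(-1j * k * (1 + 1j * gamma) * x)
    num = np.abs(np.vdot(trial, w))
    return num / np.sqrt(np.vdot(w, w).real * np.vdot(trial, trial).real)

ks = np.linspace(10.0, 80.0, 351)
gammas = np.linspace(0.0, 0.1, 101)
scores = np.array([[iwc(k, g) for g in gammas] for k in ks])
i, j = np.unravel_index(scores.argmax(), scores.shape)
print("identified k, gamma:", ks[i], gammas[j])
```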

Findings

The proposed variant of the IWC method permits accurate identification of the propagation parameters of isotropic and composite beams, whatever the type of identification process in which it is included, numerical-based or experiment-based. Its efficiency is proven with respect to an analytical model and the McDaniel method, taken as references. The application of the stochastic identification processes shows good agreement between simulation-based and experiment-based results, and shows that all identified parameters except damping are affected by uncertainties.

Originality/value

The proposed variant of the IWC method is an accurate alternative for structural identification over wide frequency ranges. A numerical-based identification process can reduce experimental cost without significant loss of accuracy. Statistical investigation of the randomness of the identified parameters illustrates the robustness of the identification against uncertainties.

Article
Publication date: 3 July 2017

Dimitrios Chronopoulos, Manuel Collet and Mohamed Ichchou

Abstract

Purpose

This paper aims to present the development of a numerical continuum-discrete approach for computing the sensitivity of the waves propagating in periodic composite structures. The work can be directly used for evaluating the sensitivity of the structural dynamic performance with respect to geometric and layering structural modifications.

Design/methodology/approach

A structure of arbitrary layering and geometric complexity is modelled using solid finite elements (FE). A generic expression for computing the variation of the mass and stiffness matrices of the structure with respect to the material and geometric characteristics is given. The sensitivity of the structural wave properties can thus be numerically determined by computing the variability of the corresponding eigenvalues of the resulting eigenproblem. The approach is validated against the finite difference method as well as analytical results.
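
A toy numerical sketch of the eigenvalue-sensitivity computation is shown below, using the standard first-order formula for the generalized eigenproblem with mass-normalized modes, together with a finite-difference check; the 2-DOF matrices are illustrative stand-ins, not a layered FE model.

```python
# Eigenvalue sensitivity for K*phi = lambda*M*phi with M-normalized modes:
# d(lambda)/dp = phi^T (dK/dp - lambda * dM/dp) phi.
import numpy as np
from scipy.linalg import eigh

def matrices(p):
    """Toy stiffness/mass matrices depending on a design parameter p."""
    K = np.array([[2.0 * p, -p], [-p, p]])
    M = np.eye(2)
    return K, M

p = 3.0
K, M = matrices(p)
lam, phi = eigh(K, M)                        # modes come back M-normalized
dK = np.array([[2.0, -1.0], [-1.0, 1.0]])    # analytical dK/dp
dM = np.zeros((2, 2))                        # mass independent of p here

analytic = np.array([phi[:, i] @ (dK - lam[i] * dM) @ phi[:, i]
                     for i in range(2)])

# Finite-difference check, as in the paper's validation:
eps = 1e-6
lam_p, _ = eigh(*matrices(p + eps))
print(analytic, (lam_p - lam) / eps)
```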

Findings

A strong wavenumber dependence is observed in the sensitivity results for a sandwich structure. This demonstrates the importance and potential of the presented tool for the optimization of layered structures for specific applications. The model can also be used to compute the effect of including smart layers such as auxetics and piezoelectrics.

Originality/value

The paper presents the first continuum-discrete approach specifically developed for accurately and efficiently computing the sensitivity of the wave propagation data for periodic composite structures irrespective of their size. The considered structure can be of arbitrary layering and material characteristics as FE modelling is used.

Details

Engineering Computations, vol. 34 no. 5
Type: Research Article
ISSN: 0264-4401
