Search results

1 – 10 of 625
Article
Publication date: 21 August 2019

Yavar Safaei Mehrabani, Mehdi Bagherizadeh, Mohammad Hossein Shafiabadi and Abolghasem Ghasempour

Abstract

Purpose

This paper aims to present an inexact 4:2 compressor cell using carbon nanotube field-effect transistors (CNFETs).

Design/methodology/approach

To design this cell, capacitive threshold logic (CTL) has been used.

Findings

To evaluate the proposed cell, comprehensive simulations are carried out at two levels: circuit and image processing. At the circuit level, the HSPICE software is used and the power consumption, delay and power-delay product (PDP) are calculated. The power-delay-transistor count product (PDAP) is also used to capture the compromise between all metrics, and Monte Carlo analysis is used to scrutinize the robustness of the proposed cell against variations in the manufacturing process. The results at this level of abstraction indicate the superiority of the proposed cell over the other circuits. At the application level, the MATLAB software is used to evaluate the peak signal-to-noise ratio (PSNR) figure of merit: two input images are multiplied by a multiplier circuit built from 4:2 compressors. The results of this simulation also show the superiority of the proposed cell over the others.
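
As an illustration of the application-level evaluation, the sketch below shows how a PSNR figure of merit can be computed in software. It is a minimal example assuming 8-bit images and NumPy; the helper names and the commented-out multiplier calls are hypothetical, not functions from the paper.

```python
import numpy as np

def psnr(reference, approximate, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image and
    the image obtained with an approximate arithmetic circuit."""
    reference = reference.astype(np.float64)
    approximate = approximate.astype(np.float64)
    mse = np.mean((reference - approximate) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical usage: compare the exact product of two images with the
# product computed by a multiplier built from the inexact 4:2 compressors.
# exact_product = multiply_exact(img_a, img_b)
# approx_product = multiply_with_inexact_compressors(img_a, img_b)
# print(psnr(exact_product, approx_product))
```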

Originality/value

This cell significantly reduces the number of transistors and consists only of NOT gates.

Details

Circuit World, vol. 45 no. 3
Type: Research Article
ISSN: 0305-6120

Article
Publication date: 16 October 2009

Chaozhong Wu, Gordon Huang, Xinping Yan, Yanpeng Cai, Yongping Li and Nengchao Lv

Abstract

Purpose

The purpose of this paper is to develop an interval method for vehicle allocation and route planning in case of an evacuation.

Design/methodology/approach

First, the evacuation route planning system is described and the notation is defined. An inexact programming model is proposed, whose goal is to achieve optimal planning of vehicle allocation with minimized system time under the condition of inexact information. The constraints of the model are of four types: number-of-vehicles constraints, passenger balance constraints, maximum link capacity constraints and non-negativity constraints. The model is solved through decomposition of the inexact model, and a hypothetical case is developed to illustrate the proposed model.
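
For illustration, a minimal sketch of such an interval (inexact) model is given below. The notation is hypothetical and not taken from the paper: $x^{\pm}_{ij}$ denotes the number of vehicles allocated from origin $i$ to destination $j$, $t^{\pm}_{ij}$ the corresponding travel time, $c_i$ the vehicle capacity, and the "$\pm$" superscripts mark parameters known only as intervals.

\begin{align*}
\min\;\; & f^{\pm} = \sum_{i}\sum_{j} t^{\pm}_{ij}\, x^{\pm}_{ij} && \text{(total system time)}\\
\text{s.t.}\;\; & \sum_{j} x^{\pm}_{ij} \le N^{\pm}_{i} && \text{(number of vehicles available at origin } i)\\
& \sum_{i} c_{i}\, x^{\pm}_{ij} \ge D^{\pm}_{j} && \text{(passenger balance at destination } j)\\
& \sum_{(i,j)\,\text{using link}\,\ell} x^{\pm}_{ij} \le U^{\pm}_{\ell} && \text{(maximum capacity of link } \ell)\\
& x^{\pm}_{ij} \ge 0 && \text{(non-negativity)}
\end{align*}

Interval models of this kind are commonly solved by decomposing them into a pair of deterministic submodels that bound the objective from below and above, which matches the solution by decomposition described above.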

Findings

The paper finds that the interval solutions are feasible and stable for the evacuation model in the given decision space; this may reduce the negative effects of uncertainty, thereby improving evacuation managers' estimates under different conditions.

Originality/value

This method entails incorporation of uncertainties existing as interval values into model formulation and solution procedure, and application of the developed model and the related solution algorithm in a hypothetical case study.

Details

Kybernetes, vol. 38 no. 10
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 9 March 2020

Hamidreza Uoosefian, Keivan Navi, Reza Faghih Mirzaee and Mahdi Hosseinzadeh

Abstract

Purpose

The high demand for fast, energy-efficient, compact computational blocks in digital electronics has led researchers to use approximate computing in applications where inaccuracy of the outputs is tolerable. The purpose of this paper is to present two ultra-high-speed current-mode approximate full adders (FA) using carbon nanotube field-effect transistors.

Design/methodology/approach

Instead of using threshold detectors, which are common elements in current-mode logic, diodes are used to stabilize voltage. Zener diodes and ultra-low-power diodes are used in the first and second proposed designs, respectively. This innovation eliminates the threshold detectors from the critical path and makes it shorter. The new adders are then employed in an image processing application, a Laplace filter that detects edges in an image.

Findings

Simulation results demonstrate very high-speed operation for the first and second proposed designs, which are, respectively, 44.7 per cent and 21.6 per cent faster than the next-fastest adder cell. In addition, they make a reasonable compromise between power-delay product (PDP) and the other important evaluation factors in the context of approximate computing. They have very few transistors and a very low total error distance, and they do not propagate error to higher bit positions because they generate the output carry correctly. According to the investigations, up to four inexact FA can be used in the Laplace filter computations without significant image quality loss. Employing the first and second proposed designs results in 42.4 per cent and 32.2 per cent PDP reduction, respectively, compared with using no approximate FA in an 8-bit ripple adder.
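
For context, the Laplace filter referred to here can be sketched as a 3 x 3 convolution. The code below is a generic software illustration using NumPy and SciPy, not the authors' current-mode hardware; in the paper's setting, the additions inside the filter would be carried out by the approximate FA.

```python
import numpy as np
from scipy.ndimage import convolve

# A common 3x3 Laplacian kernel: it responds to rapid intensity changes,
# so its output highlights edges in the image.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def laplace_edges(image):
    """Apply the Laplacian kernel to a 2-D grayscale image."""
    return convolve(image.astype(np.int64), LAPLACIAN, mode="nearest")
```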

Originality/value

Two new current-mode inexact FA are presented. For the first time, they use diodes as voltage regulators to obtain current-mode approximate full adders with a very short critical path.

Details

Circuit World, vol. 46 no. 4
Type: Research Article
ISSN: 0305-6120

Article
Publication date: 7 February 2022

Yavar Safaei Mehrabani, Mojtaba Maleknejad, Danial Rostami and HamidReza Uoosefian

Abstract

Purpose

Full adder cells are building blocks of arithmetic circuits and affect the performance of the entire digital system. The purpose of this study is to provide a low-power and high-performance full adder cell.

Design/methodology/approach

Approximate computing is a novel paradigm that is used to design low-power and high-performance circuits. In this paper, a novel 1-bit approximate full adder cell is presented using a combination of complementary metal-oxide-semiconductor (CMOS), transmission gate and pass transistor logic styles.

Findings

Simulation results confirm the superiority of the proposed design in terms of power consumption and power-delay product (PDP) compared with state-of-the-art circuits. The proposed full adder cell is also applied in an 8-bit ripple carry adder to carry out image processing applications including image blending, motion detection and edge detection. The results confirm that the proposed cell offers the best compromise and outperforms its counterparts.
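
To make the 8-bit ripple carry adder concrete, a bit-level software model is sketched below. The exact full-adder equations are standard; the paper's 11-transistor approximate cell uses different (inexact) logic that is not reproduced here, but it would take the place of `full_adder` wherever approximation is tolerated.

```python
def full_adder(a, b, cin):
    """Exact 1-bit full adder: returns (sum bit, carry-out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x, y, width=8, adder=full_adder):
    """Add two unsigned integers bit by bit, LSB first, chaining the
    carry through `width` 1-bit adder cells."""
    carry, result = 0, 0
    for i in range(width):
        a = (x >> i) & 1
        b = (y >> i) & 1
        s, carry = adder(a, b, carry)
        result |= s << i
    return result, carry

# Example: ripple_carry_add(100, 57) returns (157, 0) with exact cells.
```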

Originality/value

The proposed cell consists of only 11 transistors and decreases the switching activity remarkably. Therefore, it is a low-power and low-PDP cell.

Details

Circuit World, vol. 49 no. 4
Type: Research Article
ISSN: 0305-6120

Article
Publication date: 1 February 1994

Rod Cross

Abstract

Considers two main strands of literature. The first deals with the tension between the falsificationist view of how economic knowledge could or should be acquired, and the view that economics is a separate, deductive science. The second concerns the metaphors used in economic analysis, the main contrast being between metaphors which involve homeostasis and time reversibility, and those that involve hysteresis and time irreversibility.

Details

Journal of Economic Studies, vol. 21 no. 1
Type: Research Article
ISSN: 0144-3585

Article
Publication date: 18 December 2009

Mingshun Song, Xinghua Fang and Wei Wang

Abstract

Under the prior information that the upper and lower bounds of a random quantity are symmetric with respect to the best estimate, this paper analyses the assignment of the Bayesian prior distribution using the principle of maximum entropy. With exact lower and upper bounds, the probability density function of the quantity proves to be uniform; with inexact lower and upper bounds, it takes a curvilinear trapezoidal form.
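
For reference, the exact-bounds case is the textbook maximum-entropy result: if a quantity $X$ is known only to lie between exact bounds $a$ and $b$, the density that maximizes the differential entropy $-\int p(x)\ln p(x)\,dx$ subject to that support constraint is the uniform density

\[
p(x) =
\begin{cases}
\dfrac{1}{b-a}, & a \le x \le b,\\[4pt]
0, & \text{otherwise.}
\end{cases}
\]

When the bounds themselves are inexact, the abstract reports that the resulting prior instead takes a curvilinear trapezoidal form.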

Details

Asian Journal on Quality, vol. 10 no. 3
Type: Research Article
ISSN: 1598-2688

Article
Publication date: 1 August 1999

T Kippenberger

Abstract

Reflects on how organizations that are acknowledged leaders in the field of risk management have achieved their pre-eminent positions. Acknowledges that risk-based auditing is still in its formative stages, so identification of best practice is an inexact science, but identifies three important trends: the move from control-based to risk-based auditing; the use of scenario planning; and the understanding that risks apply to soft assets. Uses a table for extra emphasis and explanation.

Details

The Antidote, vol. 4 no. 3
Type: Research Article
ISSN: 1363-8483

Article
Publication date: 3 July 2017

Saurabh Prabhu, Sez Atamturktur and Scott Cogan

Abstract

Purpose

This paper aims to focus on the assessment of the ability of computer models with imperfect functional forms and uncertain input parameters to represent reality.

Design/methodology/approach

This assessment evaluates both the agreement between a model's predictions and available experiments and the robustness of this agreement to uncertainty. The concept of satisfying boundaries is introduced to represent the input parameter sets that yield model predictions with acceptable fidelity to observed experiments.

Findings

Satisfying boundaries provide several useful indicators for model assessment and, when calculated for varying fidelity thresholds and input parameter uncertainties, reveal the trade-off between robustness to uncertainty in model parameters, the threshold for satisfactory fidelity and the probability of satisfying the given fidelity threshold. Using a controlled case-study example, important modeling decisions, such as the acceptable level of uncertainty, fidelity requirements and resource allocation for additional experiments, are demonstrated.

Originality/value

Traditional methods of model assessment are based solely on fidelity to experiments, leading to a single parameter set that is considered fidelity-optimal and that essentially represents the values yielding the optimal compensation between various sources of error and uncertainty. Rather than maximizing fidelity, this study advocates basing model assessment on the model's ability to satisfy a required fidelity (or error tolerance). Evaluating the trade-off between error tolerance, parameter uncertainty and the probability of satisfying this predefined error threshold provides a powerful tool for model assessment and resource allocation.
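
One way to picture the satisfying-boundary idea in code is the Monte Carlo sketch below. It is a hedged illustration, not the authors' algorithm: `model`, `experiments`, `sampler` and `tolerance` are hypothetical placeholders for the computer model, the available measurements, the input-parameter uncertainty and the fidelity threshold.

```python
import numpy as np

def prob_satisfying(model, experiments, sampler, tolerance, n_samples=10_000):
    """Estimate the probability that model predictions stay within a
    fidelity threshold: sample uncertain parameters, run the model and
    count the fraction whose worst-case error is within `tolerance`."""
    hits = 0
    for _ in range(n_samples):
        theta = sampler()                     # one uncertain parameter set
        predictions = model(theta)            # model output(s) for theta
        error = np.max(np.abs(np.asarray(predictions) - np.asarray(experiments)))
        if error <= tolerance:
            hits += 1                         # theta lies inside the satisfying boundary
    return hits / n_samples

# Sweeping `tolerance` (fidelity requirement) and the spread of `sampler`
# (parameter uncertainty) traces out the trade-off described above.
```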

Details

Engineering Computations, vol. 34 no. 5
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 January 1979

J.A. GOGUEN and E.S. SHAKET

Abstract

This paper is a brief summary of work on fuzzy sets being conducted at UCLA, with an emphasis on work from Goguen's Fuzzy Robot Users Group. There is a brief summary of the earlier work upon which present work is based, a section on Fuzzy Robot Users Group results, a section on other work at UCLA, and a summary of work now in progress.

Details

Kybernetes, vol. 8 no. 1
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 5 October 2015

Sez Atamturktur and Ismail Farajpour

Abstract

Purpose

Physical phenomena interact with each other in ways such that one phenomenon cannot be analyzed without considering the others. To account for such interactions between multiple phenomena, partitioning has become a widely implemented computational approach. Partitioned analysis involves the exchange of inputs and outputs from constituent models (partitions) via iterative coupling operations, through which the individually developed constituent models are allowed to affect each other's inputs and outputs. Partitioning, whether multi-scale or multi-physics in nature, is a powerful technique that can yield coupled models able to predict the behavior of a system more complex than the individual constituents themselves. The paper aims to discuss these issues.

Design/methodology/approach

Although partitioned analysis has been a key mechanism in developing more realistic predictive models over the last decade, its iterative coupling operations may lead to the propagation and accumulation of uncertainties and errors that, if unaccounted for, can severely degrade the coupled model predictions. This problem can be alleviated by reducing uncertainties and errors in individual constituent models through further code development. However, finite resources may limit code development efforts to just a portion of possible constituents, making it necessary to prioritize constituent model development for efficient use of resources. Thus, the authors propose here an approach along with its associated metric to rank constituents by tracing uncertainties and errors in coupled model predictions back to uncertainties and errors in constituent model predictions.

Findings

The proposed approach evaluates the deficiency (relative degree of imprecision and inaccuracy), importance (relative sensitivity) and cost of further code development for each constituent model, and combines these three factors in a quantitative prioritization metric. The benefits of the proposed metric are demonstrated on a structural portal frame using an optimization-based uncertainty inference and coupling approach.
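
As a purely illustrative sketch, and not the authors' actual metric, the ranking idea can be mimicked by a score that rises with deficiency and importance and falls with the cost of further code development; all names and numbers below are hypothetical.

```python
def priority_score(deficiency, importance, cost):
    """Toy prioritization score: more deficient and more influential
    constituents rank higher, costlier ones rank lower.
    Inputs are assumed normalized to (0, 1]."""
    return deficiency * importance / cost

# Hypothetical constituents of a coupled structural model:
constituents = {
    "material model": priority_score(deficiency=0.8, importance=0.6, cost=0.5),
    "load model":     priority_score(deficiency=0.3, importance=0.9, cost=0.2),
    "geometry model": priority_score(deficiency=0.5, importance=0.4, cost=0.9),
}
ranking = sorted(constituents, key=constituents.get, reverse=True)
# ranking -> ['load model', 'material model', 'geometry model']
```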

Originality/value

This study proposes an approach and its corresponding metric for prioritizing the improvement of constituent models by quantifying their uncertainty and bias contributions, their sensitivity and the cost of further code development.

Details

Engineering Computations, vol. 32 no. 7
Type: Research Article
ISSN: 0264-4401
