Search results

1 – 10 of 767
Article
Publication date: 8 August 2022

Lionel Dongmo Fouellefack, Lelanie Smith and Michael Kruger

A hybrid-electric unmanned aerial vehicle (HE-UAV) model has been developed to address the problem of low endurance of a small electric UAV. Electric-powered UAVs are not capable…

Abstract

Purpose

A hybrid-electric unmanned aerial vehicle (HE-UAV) model has been developed to address the problem of low endurance of a small electric UAV. Electric-powered UAVs are not capable of achieving high range and endurance due to the low energy density of their batteries. Alternatively, conventional UAVs (cUAVs) using fuel with an internal combustion engine (ICE) produce more noise and thermal signatures, which is undesirable, especially if the air vehicle is required to patrol at low altitudes and remain undetected by ground patrols. This paper aims to investigate the impact of implementing hybrid propulsion technology to improve the endurance of the UAV (based on a 13.6 kg UAV).

Design/methodology/approach

A HE-UAV model is developed to analyze the fuel consumption of the UAV for given mission profiles, which were then compared to those of a cUAV. Although this UAV size was used as a reference case study, the model can potentially be used to analyze the fuel consumption of any fixed-wing UAV of similar take-off weight. The model was developed in a Matlab-Simulink environment using Simulink built-in functionalities and includes all the subsystems of the hybrid powertrain: the ICE, electric motor, battery, DC-DC converter, fuel system and propeller system, as well as the aerodynamic system of the UAV. In addition, a rule-based supervisory control strategy was implemented to characterize the split between the two propulsive components (ICE and electric motor) during the UAV mission. Finally, an electrification scheme was implemented to account for the hybridization of the UAV during certain stages of flight. The electrification scheme was then varied by changing the time duration of those stages of flight.
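A rule-based supervisory controller of the kind described can be sketched as follows. This is a hypothetical illustration, not the authors' model: the segment names, state-of-charge reserve and assist fraction are invented for the example.

```python
# Hypothetical sketch of a rule-based supervisory controller that splits
# propulsive power demand between an ICE and an electric motor (EM).
# Thresholds and segment names are illustrative assumptions only.

def power_split(segment: str, p_demand_w: float, soc: float) -> dict:
    """Return the ICE/EM power split (in watts) for one mission segment.

    segment    -- mission phase, e.g. "takeoff", "climb", "cruise", "loiter"
    p_demand_w -- total propulsive power demanded by the airframe
    soc        -- battery state of charge, 0.0 to 1.0
    """
    soc_min = 0.2  # below this reserve, the EM is not used (assumed)
    if segment == "loiter" and soc > soc_min:
        # Electric-only loiter: quiet, low thermal signature
        return {"ice_w": 0.0, "em_w": p_demand_w}
    if segment in ("takeoff", "climb") and soc > soc_min:
        # Peak demand: EM assists the ICE up to an assumed assist fraction
        em_w = 0.3 * p_demand_w
        return {"ice_w": p_demand_w - em_w, "em_w": em_w}
    # Default (cruise or depleted battery): ICE carries the full load
    return {"ice_w": p_demand_w, "em_w": 0.0}

split = power_split("loiter", 800.0, soc=0.6)
```

In a Simulink implementation, the same rules would typically sit in a Stateflow chart or a lookup block driving the power-split command.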

Findings

Based on simulation, it was observed that a HE-UAV could achieve a fuel saving of 33% compared to the cUAV. A validation study predicted a fuel-consumption improvement of 9.5% for the Aerosonde UAV.

Originality/value

The novelty of this work lies in the implementation of a rule-based supervisory controller to characterize the split between the two propulsive components during the UAV mission. Also, the model was created by considering steady flight during cruise, but not during the climb and descent segments of the mission.

Details

Aircraft Engineering and Aerospace Technology, vol. 95 no. 3
Type: Research Article
ISSN: 1748-8842

Keywords

Article
Publication date: 1 June 2000

P. Di Barba

Introduces papers from this area of expertise from the ISEF 1999 Proceedings. States the goal herein is one of identifying devices or systems able to provide prescribed…

Abstract

Introduces papers from this area of expertise from the ISEF 1999 Proceedings. States that the goal herein is one of identifying devices or systems able to provide prescribed performance. Notes that 18 papers from the Symposium are grouped in the area of automated optimal design. Describes the main challenges that condition computational electromagnetism's future development. Concludes by itemizing the range of applications in this third chapter, from small actuators to the optimization of induction heating systems.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 19 no. 2
Type: Research Article
ISSN: 0332-1649

Keywords

Book part
Publication date: 19 October 2020

Lixuan Zhang, Eric Smith and Andrea Gouldman

This study examines the impacts of three individual values on the willingness to pay and perceived fairness of use tax on Internet purchases. Analysis of survey data collected…

Abstract

This study examines the impacts of three individual values on the willingness to pay and perceived fairness of use tax on Internet purchases. Analysis of survey data collected from 114 taxpayers reveals that while a strong sense of national identity is significantly correlated with fairness perceptions of use tax, it is not significantly related to perception of willingness to pay use tax. Our findings suggest that taxpayers with a high level of religiosity are more willing to pay use tax, although they do not perceive the use tax to be fair.

Details

Advances in Taxation
Type: Book
ISBN: 978-1-83909-185-8

Keywords

Article
Publication date: 6 November 2017

Chaw Thet Zan and Hayato Yamana

The paper aims to estimate the segment size and alphabet size of Symbolic Aggregate approXimation (SAX). In SAX, time series data are divided into a set of equal-sized segments…

Abstract

Purpose

The paper aims to estimate the segment size and alphabet size of Symbolic Aggregate approXimation (SAX). In SAX, time series data are divided into a set of equal-sized segments. Each segment is represented by its mean value and mapped to an alphabet, where the number of adopted symbols is called the alphabet size. Both parameters control the data compression ratio and the accuracy of time series mining tasks. Moreover, optimal parameter selection depends strongly on the application and data set. In practice, these parameters are selected iteratively by analyzing entire data sets, which limits the handling of huge amounts of time series data and reduces the applicability of SAX.
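The standard SAX symbolization described above can be sketched as follows; this is a minimal illustration of the baseline technique whose two parameters the paper sets out to estimate, not the authors' code, and the helper names are invented.

```python
# Minimal sketch of standard SAX: z-normalize, average equal-sized
# segments (PAA), then map each segment mean to a letter using
# equiprobable breakpoints of the standard normal distribution.
import statistics

def sax(series, n_segments, alphabet_size):
    """Return the SAX word for a numeric sequence."""
    mu = statistics.fmean(series)
    sigma = statistics.pstdev(series) or 1.0  # guard constant series
    z = [(x - mu) / sigma for x in series]
    seg_len = len(z) // n_segments  # assumes length divisible by n_segments
    means = [statistics.fmean(z[i * seg_len:(i + 1) * seg_len])
             for i in range(n_segments)]
    # Breakpoints split N(0, 1) into alphabet_size equiprobable regions
    nd = statistics.NormalDist()
    bps = [nd.inv_cdf(k / alphabet_size) for k in range(1, alphabet_size)]
    word = ""
    for m in means:
        idx = sum(m > b for b in bps)  # region index of this segment mean
        word += chr(ord('a') + idx)
    return word

word = sax([0, 0, 0, 0, 10, 10, 10, 10], n_segments=2, alphabet_size=2)
# → "ab": low first half, high second half
```

The segment size and alphabet size passed in here are exactly the two parameters the proposed autoSAXSD schemes aim to choose automatically.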

Design/methodology/approach

The segment size is estimated based on the Shannon sampling theorem (autoSAXSD_S) and on adaptive hierarchical segmentation (autoSAXSD_M). The alphabet size is estimated from how the mean values of all the segments are distributed: a small alphabet size is set for a wide distribution, so that the differences among segments are easily distinguished.

Findings

Experimental evaluation using University of California Riverside (UCR) data sets shows that the proposed schemes are able to select the parameters well, with high classification accuracy and efficiency comparable to the state-of-the-art methods, SAX and auto_iSAX.

Originality/value

The originality of this paper is the way the optimal parameters of SAX are found using the proposed estimation schemes. The first parameter, segment size, is automatically estimated using two approaches, and the second parameter, alphabet size, is estimated from the most frequent average (mean) value among the segments.

Details

International Journal of Web Information Systems, vol. 13 no. 4
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 1 August 2005

Carmen Galvez, Félix de Moya‐Anegón and Víctor H. Solana

To propose a categorization of the different conflation procedures at the two basic approaches, non‐linguistic and linguistic techniques, and to justify the application of…

Abstract

Purpose

To propose a categorization of the different conflation procedures under the two basic approaches, non-linguistic and linguistic techniques, and to justify the application of normalization methods within the framework of linguistic techniques.

Design/methodology/approach

Presents a range of term conflation methods that can be used in information retrieval. The uniterm and multiterm variants can be considered equivalent units for the purposes of automatic indexing. Stemming algorithms, segmentation rules, association measures and clustering techniques are well-evaluated non-linguistic methods, and experiments with these techniques show a wide variety of results. Alternatively, lemmatisation and the use of syntactic pattern-matching, through equivalence relations represented in finite-state transducers (FSTs), are emerging methods for the recognition and standardization of terms.
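The idea of conflation via an equivalence relation can be illustrated with a toy normalizer; the variant table below is invented for the example, and a real system would encode the relation in a genuine finite-state transducer rather than a dictionary.

```python
# Toy term conflation: map surface variants of a term to one canonical
# (normalized) form, in the spirit of FST-based normalization.
# The variant table is an illustrative assumption, not real data.

VARIANTS = {
    "information retrieval": "information retrieval",
    "retrieval of information": "information retrieval",
    "information-retrieval": "information retrieval",
    "indexing": "indexing",
    "indexation": "indexing",
}

def conflate(term: str) -> str:
    """Return the canonical form of a term variant, or the term unchanged."""
    return VARIANTS.get(term.lower().strip(), term)

canonical = conflate("Retrieval of Information")  # → "information retrieval"
```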

Findings

The survey attempts to point out the positive and negative effects of the linguistic approach and its potential as a term conflation method.

Originality/value

Outlines the importance of FSTs for the normalization of term variants.

Details

Journal of Documentation, vol. 61 no. 4
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 3 October 2016

Gopalakrishnan Narayanamurthy and Anand Gurumurthy

The purpose of this paper is to describe a leanness assessment methodology that takes into account the interaction between lean elements for computing the systemic leanness and…

Abstract

Purpose

The purpose of this paper is to describe a leanness assessment methodology that takes into account the interaction between lean elements for computing the systemic leanness and for assisting continuous improvement of lean implementation.

Design/methodology/approach

Key elements determining the leanness level were identified by reviewing the relevant literature and were structured as a framework. A graph-theoretic approach (GTA) was used as the assessment methodology because of its ability to evaluate the interactions between the elements in the developed framework.
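In GTA, a system's elements typically sit on the diagonal of a square matrix, pairwise interactions form the off-diagonal entries, and the permanent of the matrix gives a single systemic index. A minimal sketch of that computation, with invented scores rather than the paper's framework:

```python
# Illustrative graph-theoretic approach (GTA) sketch: the permanent of a
# variable permanent matrix condenses element scores (diagonal) and their
# interactions (off-diagonal) into one systemic index. Values are invented.
from itertools import permutations

def permanent(m):
    """Permanent of a square matrix: like the determinant, but all terms
    are added (no alternating signs), so interactions only accumulate."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        prod = 1
        for i, j in enumerate(perm):
            prod *= m[i][j]
        total += prod
    return total

# 3 hypothetical lean elements (diagonal scores) with weak interactions
matrix = [
    [4, 1, 1],
    [1, 3, 1],
    [1, 1, 5],
]
index = permanent(matrix)  # systemic index for this configuration → 74
```

Because every term is positive, stronger interactions strictly raise the index, which is what makes the permanent attractive for benchmarking a systemic leanness score.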

Findings

Interactions between the lean elements were configured. Application of the proposed GTA for assessing systemic leanness was demonstrated. Scenario analysis was performed and a scale was developed to assist firms in comparing their systemic leanness index.

Research limitations/implications

This paper is unique in developing an assessment approach for measuring the systemic leanness. In addition, this study explains how the implementation of lean thinking (LT) in a value stream can be continuously improved by proposing a systemic leanness index that can be benchmarked. The proposed approach to measure systemic leanness can be tested across different value streams in future for extending its generalizability.

Practical implications

The proposed framework and leanness assessment approach present an innovative tool for practitioners to capture the systemic aspect of LT. The proposed assessment approach supports practitioners in achieving continuous improvement in lean implementation by revealing the lean elements that need to be focused on in the future.

Originality/value

The study introduces a new perspective for LT by studying the importance of the interactions between the lean elements and by incorporating them to assess the systemic leanness.

Details

Journal of Manufacturing Technology Management, vol. 27 no. 8
Type: Research Article
ISSN: 1741-038X

Keywords

Article
Publication date: 1 March 2013

Yuanwang Yang, Jingye Cai and Jose Schutt‐Aine

Spurious frequencies (spurs) resulting from phase truncation are one of the main signal integrity issues for direct digital synthesizers (DDS). The standard approach in DDS design…

Abstract

Purpose

Spurious frequencies (spurs) resulting from phase truncation are one of the main signal integrity issues for direct digital synthesizers (DDS). The standard approach in DDS design consists of truncating the phase word output from the phase accumulator, in order to minimize the size of the lookup table. This process generates spurs and degrades the quality of signals at the output of a DDS. In principle, since the bit width of the digital‐to‐analog converter (DAC) is narrower than that of the lookup table, the latter can be compressed without using phase truncation. The purpose of this paper is to propose a novel spur‐free truncation method for compressing the sine lookup table in a DDS structure.
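The conventional DDS datapath the abstract describes can be sketched as follows; the bit widths are illustrative assumptions, and the snippet shows the standard truncating architecture that produces the spurs, not the paper's proposed spur-free structure.

```python
# Sketch of a conventional DDS: an N-bit phase accumulator whose output
# is truncated to P bits to address a sine lookup table quantized to the
# DAC width. The truncation step is the source of the spurs discussed.
import math

N = 32          # phase accumulator width in bits (illustrative)
P = 10          # truncated phase width = lookup-table address width
DAC_BITS = 8    # DAC narrower than the accumulator, as the paper notes

# Precompute one full sine period, quantized to the DAC's range
lut = [round((2 ** (DAC_BITS - 1) - 1) * math.sin(2 * math.pi * i / 2 ** P))
       for i in range(2 ** P)]

def dds_samples(tuning_word, n):
    """Generate n DAC codes; output frequency = tuning_word / 2**N * f_clk."""
    phase = 0
    out = []
    for _ in range(n):
        out.append(lut[phase >> (N - P)])  # phase truncation happens here
        phase = (phase + tuning_word) & (2 ** N - 1)  # modulo-2**N accumulate
    return out

samples = dds_samples(tuning_word=2 ** 24, n=16)
```

Whenever the discarded low N-P bits of the phase are nonzero, the addressed table entry is slightly wrong in a periodic pattern, which is exactly what appears as spurs in the output spectrum.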

Design/methodology/approach

In this paper, a novel spur‐free truncation method for compressing the sine lookup table in a DDS structure is proposed. First, the paper discusses the origins of spurs in direct digital synthesizers; next, methods for avoiding large sine lookup tables are analyzed to help generate spur‐free outputs via truncation.

Findings

By introducing a comparator and an adder into the traditional DDS architecture, the sine lookup table can be compressed without significant hardware changes in the design. Simulation results using MATLAB and implementation results on an FPGA evaluation platform show that the novel structure can eliminate the truncation spurs without increasing the size of the lookup table. Previous work on lookup table compression algorithms remains applicable to the novel structure.

Originality/value

A novel approach for reducing the size of the digital synthesizer lookup table is proposed in this work. Size reduction is achieved without producing truncation spurs. The method exploits the property that the bit width of a DAC will generally be smaller than the bit width of a phase accumulator. A comparator and an adder are introduced into the traditional structure of a DDS to help achieve the size compression. Simulations in MATLAB verified that the novel structure can eliminate truncation spurs in the output signal without increasing the size of the lookup table.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 32 no. 2
Type: Research Article
ISSN: 0332-1649

Keywords

Content available

Abstract

Details

Library Hi Tech News, vol. 19 no. 7
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 29 November 2018

David C. Novak, James L. Sullivan, Jeremy Reed, Mladen Gagulic and Nick Van Den Berg

The ability to measure and assess “quality” is essential in building and maintaining a safe and effective transportation system. Attaining acceptable quality outcomes in…

Abstract

Purpose

The ability to measure and assess “quality” is essential in building and maintaining a safe and effective transportation system. Attaining acceptable quality outcomes in transportation projects has been a recurring problem at both the federal and state levels, at least partially as a result of poorly developed, inefficient or nonexistent quality assurance/quality control (QA/QC) processes. The purpose of this paper is to develop and implement a new QA/QC process that focuses on a novel double-bounded performance-related specification (PRS) and a corresponding pay factor policy that includes both lower and upper quality acceptance and payment reward boundaries for bridge concrete.

Design/methodology/approach

The authors use historical data to design different payment scenarios illustrating likely industry responses to the new PRS, and select the single scenario that best balances risk between the agency and industry. The authors then convert that payment scenario to a pay factor schedule using a search heuristic and determine statistical compliance with the PRS using percent-within-limits (PWL).
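The percent-within-limits statistic used for compliance can be sketched as follows for a double-bounded specification. This uses a normal approximation in place of the standard PWL tables, and the limits and pay-factor schedule are illustrative assumptions, not the Vermont values.

```python
# Hedged sketch of percent-within-limits (PWL) for a double-bounded spec:
# quality indices are computed against both limits, and the two one-sided
# percentages are combined. Normal approximation replaces the PWL tables.
import statistics

def pwl(samples, lsl, usl):
    """Estimate percent of product within [lsl, usl] from a sample."""
    mean = statistics.fmean(samples)
    s = statistics.stdev(samples)
    nd = statistics.NormalDist()
    q_u = (usl - mean) / s          # upper quality index
    q_l = (mean - lsl) / s          # lower quality index
    pwl_u = 100 * nd.cdf(q_u)       # percent below the upper limit
    pwl_l = 100 * nd.cdf(q_l)       # percent above the lower limit
    return pwl_u + pwl_l - 100      # percent inside both limits

def pay_factor(pwl_value):
    """Illustrative schedule: full pay at 90 PWL, bonus above, penalty below."""
    return round(min(1.05, max(0.75, 1.0 + (pwl_value - 90) / 200)), 3)

quality = pwl([4.1, 4.3, 4.0, 4.2, 4.4], lsl=3.5, usl=5.0)
```

The paper's finding that a simple PWL-to-pay-factor table no longer suffices follows from the double bound: the same PWL value can arise with the sample mean above or below the design target, so the mean must enter the schedule explicitly.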

Findings

The methodology offers an innovative approach for developing an initial set of pay factors when lifecycle cost data are lacking and the PRS is new or modified. An important finding is that, with a double-bounded PRS, it is not possible to represent pay factors using the simplified PWL table currently employed in practice, because each PWL value occupies two separate positions in the payment structure – one above the design target and one below it. Therefore, a more detailed set of pay factors must be employed that explicitly specifies the mean sample value and the design target. The approach is demonstrated in practice for the Agency of Transportation in the state of Vermont.

Research limitations/implications

The authors demonstrate a novel approach for developing a double-bounded PRS and introduce a payment incentive/disincentive policy with the goal of improving total product quality. The new pay factor policy includes both a payment penalty below the contracted price for failing to meet a specified performance criterion as well as a payment premium above the contracted price that increases as the sample product specification approaches an “ideal” design value. The PRS includes both an upper and lower acceptance boundary for the finished product as opposed to only a lower tail acceptance boundary, which is the traditional approach.

Practical implications

The authors illustrate a research collaboration between academia and a state agency that highlights the role academic research can play in advancing quality management practices. The study involves the use of actual product performance data and is operational as opposed to conceptual in nature. Finally, the authors offer important practical insights and guidance by demonstrating how a new PRS and pay factor policy can be developed without the use of site-specific historical lifecycle cost (LCC) data that include detailed manufacturing, producing and placement cost data, as well as data related to product performance over time. This is an important contribution, as the development and implementation of pay factor policies typically involve the use of historical LCC data. However, in many cases, these data are not available or may be incomplete.

Social implications

With the new PRS and pay factor schedule, the Agency expects shrinkage and cracking on bridge decks to decrease, along with overall maintenance and rehabilitation costs. A major focus of the new PRS is to actively involve industry partners in quality improvement efforts.

Originality/value

The authors focus on a major modification to an existing QA/QC process that involves the development of a new PRS and an associated pay factor policy undertaken by the Vermont Agency of Transportation. The authors use empirical data to develop a novel double-bounded PRS and payment schedule for concrete and offer unique operational/practical insight and guidance by demonstrating how a new PRS and pay factor policy can be developed without the use of site-specific historical LCC data. Typically, PRS for in-place concrete have only a lower tail acceptance boundary.

Details

International Journal of Quality & Reliability Management, vol. 35 no. 10
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 31 July 2009

K.G. Verma, B.K. Kaushik and R. Singh

Process variation has become a major concern in the design of many nanometer circuits, including interconnect pipelines. The purpose of this paper is to provide a comprehensive…

Abstract

Purpose

Process variation has become a major concern in the design of many nanometer circuits, including interconnect pipelines. The purpose of this paper is to provide a comprehensive overview of the types and sources of interconnect process variations.

Design/methodology/approach

The impacts of these interconnect process variations on circuit delay and cross-talk noise, along with the two major sources of delay – parametric delay variations and global interconnect delays – are discussed.

Findings

The parametric delay evaluation under process variation method avoids the multiple parasitic extractions and multiple delay evaluations of the traditional response surface method, which results in a significant speedup. Furthermore, both systematic and random process variations have been contemplated. The systematic variations need to be experimentally modeled and calibrated, while the random variations are inherent, non-deterministic fluctuations in process parameters arising during manufacturing.
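The effect of random parameter variation on interconnect delay can be illustrated with a small Monte Carlo experiment on a first-order (Elmore) RC delay; the nominal values and the 10% sigma are assumptions for the sketch, not figures from the review.

```python
# Illustrative Monte Carlo sketch: random (non-deterministic) variation in
# a wire's R and C spreads its Elmore delay around the nominal value.
# Nominal values and the 10% relative sigma are assumed for illustration.
import random
import statistics

random.seed(42)

R_NOM = 100.0    # ohms, nominal wire resistance (assumed)
C_NOM = 50e-15   # farads, nominal wire capacitance (assumed)
SIGMA = 0.10     # assumed 10% relative standard deviation

def elmore_delay(r, c):
    """First-order 50% delay of a lumped RC segment: 0.69 * R * C."""
    return 0.69 * r * c

delays = [
    elmore_delay(random.gauss(R_NOM, SIGMA * R_NOM),
                 random.gauss(C_NOM, SIGMA * C_NOM))
    for _ in range(10_000)
]
mean_d = statistics.fmean(delays)   # stays close to the nominal delay
spread = statistics.stdev(delays)   # delay spread induced by variation
```

A response-surface flow would re-extract parasitics and re-evaluate delay for each sample; the parametric delay evaluation method the paper reviews is aimed at avoiding exactly that repeated cost.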

Originality/value

This paper usefully reviews process variation effects on very large‐scale integration (VLSI) interconnect.

Details

Microelectronics International, vol. 26 no. 3
Type: Research Article
ISSN: 1356-5362

Keywords
