Search results
1 – 10 of over 47,000
Young-Min Kwon, Sung-Boo Hong, Jae-Sang Park and Yu-Been Lee
Abstract
Purpose
The purpose of this study is to use individual blade pitch control (IBC) to actively reduce both the rotor hub vibratory loads and the airframe vibration responses of a lift-offset compound helicopter at a high-speed flight condition.
Design/methodology/approach
The Sikorsky X2 technology demonstrator (X2TD) is used as the lift-offset compound helicopter. The X2TD lift-offset rotor is modelled and its rotor hub vibratory loads at a flight speed of 250 knots are predicted using a rotorcraft comprehensive analysis code, CAMRAD II, and the airframe structural dynamics is represented with a finite element analysis code, MSC.NASTRAN. With the propulsive trim methodology applied for rotor trim, a search is conducted for the best IBC input condition using multiple harmonic inputs that reduces the rotor vibration while the rotor aerodynamic performance (the rotor effective lift-to-drag ratio) is improved or at least maintained. Finally, the reduction in airframe vibration responses is investigated when this best multiple-harmonic IBC input condition is applied to the lift-offset rotor.
Findings
When IBC with a single harmonic input using a 2/rev actuation frequency, an amplitude of 2° and a control phase angle of 120° (2P/2°/120°) is applied to the X2TD rotor, the rotor vibration is reduced by only about 26.37% and the rotor effective lift-to-drag ratio increases slightly, by 0.98%. When the X2TD rotor uses IBC with multiple harmonic inputs (2P/2°/45° + 5P/1°/90°), the rotor hub vibratory loads and the airframe vibration responses are reduced by 44.69% and by 0.48 to 79.10%, respectively, while the rotor effective lift-to-drag ratio is improved by 0.77%, as compared to the baseline without IBC.
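A multiple-harmonic IBC input of the kind reported above can be written as a sum of cosine harmonics of the blade azimuth. The sketch below assumes the common convention θ(ψ) = Σ Aₙ cos(nψ − φₙ), which the abstract does not spell out:

```python
import math

def ibc_pitch(psi_deg, harmonics):
    """Individual blade pitch input (degrees) at blade azimuth psi_deg,
    summed over (n, amplitude_deg, phase_deg) harmonic triples."""
    return sum(a * math.cos(math.radians(n * psi_deg - phi))
               for n, a, phi in harmonics)

# The best multiple-harmonic input reported above: 2P/2deg/45deg + 5P/1deg/90deg
best = [(2, 2.0, 45.0), (5, 1.0, 90.0)]
print(ibc_pitch(0.0, best))
```

Each blade would receive this input with its own azimuth offset, which is what distinguishes IBC from conventional swashplate higher-harmonic control.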
Originality/value
This is the first study to apply 2/rev IBC actuation to a four-bladed lift-offset coaxial rotor and to investigate simultaneously obtaining rotor vibration reduction, rotor performance improvement and airframe vibration reduction using IBC with multiple harmonic inputs.
Abstract
Purpose
Due to the large volume of non-uniform transactions per day, money laundering detection (MLD) is a time-consuming and difficult process. The major purpose of the proposed auto-regressive (AR) outlier-based MLD (AROMLD) is to reduce the time consumed in handling large-sized non-uniform transactions.
Design/methodology/approach
The AR-based outlier design produces consistent, asymptotically distributed results that enhance demand-forecasting ability. Besides, the inter-quartile range (IQR) formulations proposed in this paper support the detailed analysis of time-series data pairs.
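A minimal sketch of the AR-plus-IQR idea, assuming an AR(1) fit and the standard Tukey fences (the paper's exact formulation is not given in the abstract):

```python
import statistics

def ar1_iqr_outliers(series, k=1.5):
    """Flag indices whose AR(1) one-step residual falls outside the
    Tukey fences [Q1 - k*IQR, Q3 + k*IQR] of all residuals."""
    mean = statistics.fmean(series)
    x = [v - mean for v in series]
    # Least-squares AR(1) coefficient on the demeaned series
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    phi = num / den
    resid = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
    q1, _, q3 = statistics.quantiles(resid, n=4)
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return [t for t, r in zip(range(1, len(x)), resid) if r < lo or r > hi]

# Smooth series with one injected spike (a stand-in for a suspicious transaction)
tx = [100 + 2 * (i % 5) for i in range(60)]
tx[30] = 500
print(ar1_iqr_outliers(tx))
```

The IQR fences make the flagging robust to the spike itself, since quartiles are barely moved by a single extreme residual.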
Findings
The high dimensionality of the data and the difficulty of establishing relationships or differences between data pairs make time-series mining a complex task. The presence of domain invariance in time-series mining motivates a regressive formulation for outlier detection. The deep analysis of the time-varying process and the demands of forecasting combine the AR and IQR formulations into an effective outlier detector.
Research limitations/implications
The present research focuses on the detection of outliers in previous financial transactions by using the AR model. Predicting the possibility of an outlier in future transactions remains a major issue.
Originality/value
The lack of prior segmentation in ML detection leads to dimensionality problems, and the absence of a boundary isolating normal from suspicious transactions imposes further limitations. The proposed regression formulation overcomes the lack of deep analysis and the high time consumption.
Aleena Swetapadma, Tishya Manna and Maryam Samami
Abstract
Purpose
A novel method has been proposed to reduce the false alarm rate for arrhythmia patients regarding life-threatening conditions in the intensive care unit. For this purpose, the arterial blood pressure, photoplethysmogram (PLETH), electrocardiogram (ECG) and respiratory (RESP) signals are considered as input signals.
Design/methodology/approach
Three machine learning approaches, namely, a feed-forward artificial neural network (ANN), an ensemble learning method and a k-nearest neighbors search, are used to detect false alarms. The proposed method has been implemented using Arduino and MATLAB/SIMULINK on real-time monitoring data of ICU arrhythmia patients.
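Of the three approaches, the k-nearest neighbors search is the simplest to sketch. The feature pairs below (heart-rate estimates from two channels, where disagreement suggests a sensor artefact rather than a true event) are invented for illustration and are not the authors' features:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Majority vote among the k nearest training points (Euclidean distance).
    train is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy feature vectors: (rate from ECG, rate from PLETH); channel agreement
# suggests a true alarm, strong disagreement a false one.
train = [((30, 31), "true_alarm"), ((28, 29), "true_alarm"),
         ((180, 178), "true_alarm"), ((75, 20), "false_alarm"),
         ((80, 15), "false_alarm"), ((70, 10), "false_alarm")]
print(knn_classify(train, (78, 14)))
```

A real system would use many more labelled episodes and calibrated features, but the voting logic is unchanged.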
Findings
The proposed method detects false alarms with an accuracy of 99.4 per cent during asystole, 100 per cent during ventricular flutter, 98.5 per cent during ventricular tachycardia, 99.6 per cent during bradycardia and 100 per cent during tachycardia. The proposed framework is adaptive to many scenarios, easy to implement, computationally friendly, and highly accurate and robust, with no overfitting issue.
Originality/value
As ECG signals consist of the PQRST wave complex, any deviation from the normal pattern may signify an alarming condition. These deviations can be used directly as classifier inputs for false alarm detection; hence, there is no need for other feature extraction techniques. The feed-forward ANN with the Levenberg–Marquardt algorithm has shown a higher rate of convergence than other neural network algorithms, which helps provide better accuracy with no overfitting.
Md. Sazol Ahmmed, Md. Faisal Arif and Md. Mosharraf Hossain
Abstract
Purpose
Solid waste (SW) is a result of rapid urbanization and industrialization, and it is increasing day by day with the growing population. This paper emphasizes the prediction of SW generation in the city of Dhaka and the search for sustainable pathways to minimize the gaps in the existing system.
Design/methodology/approach
In this paper, a questionnaire survey of the Dhaka South City Corporation (DSCC) was conducted. Monthly SW generation data for the city of Dhaka, covering a few years, were collected to develop an artificial neural network (ANN) model, which was used for the accurate prediction of SW generation.
Findings
First, different models were created and tested using an ANN with one hidden layer while varying the number of neurons in that layer. According to the R values (training, test, all), the structure with six neurons in the hidden layer was selected as the most suitable model. Finally, six gaps were found in the existing solid waste management (SWM) system of the DSCC; these gaps are the main barriers to better SWM.
Originality/value
The authors propose that the best model for prediction is the 12-6-3 structure, whose training and testing R values are 0.9972 and 0.80380, respectively, so the predictions are very close to the actual data. This paper also identifies opportunities for closing those gaps so that the DSCC can achieve better results on the SW problem.
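As an illustration of a 12-6-3 structure (12 inputs, six tanh hidden neurons, three outputs), here is a from-scratch sketch trained on synthetic data; the actual DSCC figures, activation functions and training algorithm are not given in the abstract, so everything below is assumption:

```python
import math, random

random.seed(0)

def make_net(n_in, n_hid, n_out, s=0.5):
    """One hidden layer (tanh) + linear output, e.g. a 12-6-3 structure."""
    r = lambda: random.uniform(-s, s)
    return {"W1": [[r() for _ in range(n_in)] for _ in range(n_hid)],
            "b1": [r() for _ in range(n_hid)],
            "W2": [[r() for _ in range(n_hid)] for _ in range(n_out)],
            "b2": [r() for _ in range(n_out)]}

def forward(net, x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(net["W1"], net["b1"])]
    y = [sum(w * hi for w, hi in zip(row, h)) + b
         for row, b in zip(net["W2"], net["b2"])]
    return h, y

def train_step(net, x, t, lr=0.05):
    """One stochastic gradient descent step on the squared error."""
    h, y = forward(net, x)
    dy = [yi - ti for yi, ti in zip(y, t)]
    dh = [(1 - hi * hi) * sum(net["W2"][o][j] * dy[o] for o in range(len(dy)))
          for j, hi in enumerate(h)]
    for o, row in enumerate(net["W2"]):
        for j in range(len(h)):
            row[j] -= lr * dy[o] * h[j]
        net["b2"][o] -= lr * dy[o]
    for j, row in enumerate(net["W1"]):
        for i in range(len(x)):
            row[i] -= lr * dh[j] * x[i]
        net["b1"][j] -= lr * dh[j]
    return sum(d * d for d in dy)

# Synthetic 12-input, 3-output samples standing in for monthly waste figures
data = []
for _ in range(40):
    x = [random.uniform(0, 1) for _ in range(12)]
    data.append((x, [sum(x[:4]) / 4, sum(x[4:8]) / 4, sum(x[8:]) / 4]))

net = make_net(12, 6, 3)
first = sum(train_step(net, x, t) for x, t in data)
for _ in range(200):
    last = sum(train_step(net, x, t) for x, t in data)
print(first, last)
```

Model selection as described above would repeat this for several hidden-layer sizes and keep the one with the best R values on held-out data.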
Y.K. Shobha and H.G. Rangaraju
Abstract
Purpose
The suggested work examines the latest developments in non-orthogonal multiple access (NOMA) for 5G networks, such as the techniques employed for power allocation, browser techniques, modern analysis and bandwidth efficiency. Furthermore, the proposed work illustrates the performance of NOMA when it is combined with various wireless communication techniques, namely, network coding, multiple-input multiple-output (MIMO), space-time coding, cooperative communications and many more. For the MIMO system, the proposed research work specifically deals with a low-complexity recursive linear minimum mean square error (LMMSE) multiuser detector combined with NOMA (MIMO-NOMA), in which a multiple-antenna base station (BS) and multiple single-antenna users communicate simultaneously. Although LMMSE is a linear detector with low complexity, it performs poorly in multiuser detection because of the mismatch between LMMSE detection and multiuser decoding. Thus, to obtain a desirable iterative detection rate, the proposed research work presents matching constraints between the detectors and decoders of MIMO-NOMA.
Design/methodology/approach
To improve performance in 5G technologies as well as in cellular communication, the NOMA technique is employed and is regarded as one of the best radio access methodologies. It offers several advantages, such as enhanced spectral efficiency, in contrast to the high-capacity orthogonal multiple access (OMA) approach, also known as orthogonal frequency division multiple access (OFDMA). The NOMA technique falls into code-domain and power-domain categories; the suggested research work concentrates mainly on power-domain NOMA. This approach uses superposition coding (SC) at the source and successive interference cancellation (SIC) at the recipient. For fifth-generation applications, the network-level as well as user-experienced data rate prerequisites have been successfully illustrated by various researchers.
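The power-domain mechanism just described (SC at the transmitter, SIC at the receiver) can be sketched for two BPSK users over a noiseless toy channel; the 0.2/0.8 power split is an illustrative assumption, not a value from the paper:

```python
def noma_transmit(bit_near, bit_far, p_near=0.2, p_far=0.8):
    """Superposition coding: BPSK symbols scaled by power shares (p sums to 1).
    The far (weak-channel) user gets the larger share, per power-domain NOMA."""
    s = lambda b: 1.0 if b else -1.0
    return (p_near ** 0.5) * s(bit_near) + (p_far ** 0.5) * s(bit_far)

def far_user_decode(y):
    # The far user treats the near user's low-power signal as noise
    return y > 0

def near_user_decode(y, p_near=0.2, p_far=0.8):
    # SIC: decode the strong far-user signal first, subtract it, then decode own
    b_far = y > 0
    residual = y - (p_far ** 0.5) * (1.0 if b_far else -1.0)
    return residual > 0

for bn in (0, 1):
    for bf in (0, 1):
        y = noma_transmit(bn, bf)
        print(bn, bf, near_user_decode(y), far_user_decode(y))
```

With noise and fading added, the SIC step is where error propagation arises, which is one motivation for the iterative LMMSE detection studied in the paper.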
Findings
The suggested combined methodology, MIMO-NOMA, demonstrates a synchronized iterative LMMSE system that can achieve the optimal performance of symmetric MIMO-NOMA with several users. To transmit the information from sender to receiver, the hybrid methodologies are confined to 2 × 2 and 4 × 4 antenna arrays; parameters such as PAPR, BER and SNR are analyzed, and the efficiency of various modulation strategies such as BPSK and M-QAM (M = 8, 16, 32, 64) is computed.
Originality/value
The proposed hybrid MIMO-NOMA methodologies are synchronized in an iterative process for LMMSE optimization that can achieve the optimal performance of symmetric MIMO-NOMA for several users under different noise conditions. From the simulation results, improvements of 18%, 23%, 16% and 8% are found in terms of bit error rate (BER), least minimum mean squared error (LMMSE), peak-to-average power ratio (PAPR) and channel capacity, respectively, for the binary phase shift keying (BPSK) and quadrature amplitude modulation (QAM) techniques.
Geng Cui, Man Leung Wong, Guichang Zhang and Lin Li
Abstract
Purpose
The purpose of this paper is to assess the performance of competing methods and model selection, which are non‐trivial issues given the financial implications. Researchers have adopted various methods including statistical models and machine learning methods such as neural networks to assist decision making in direct marketing. However, due to the different performance criteria and validation techniques currently in practice, comparing different methods is often not straightforward.
Design/methodology/approach
This study compares the performance of neural networks with that of classification and regression tree, latent class models and logistic regression using three criteria – simple error rate, area under the receiver operating characteristic curve (AUROC), and cumulative lift – and two validation methods, i.e. bootstrap and stratified k‐fold cross‐validation. Systematic experiments are conducted to compare their performance.
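Two of the evaluation ingredients named above, AUROC and stratified fold assignment, can be sketched in a small library-free form; the labels and scores below are invented for illustration:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of positive/negative pairs ranked correctly."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def stratified_folds(labels, k):
    """Assign each example a fold id, round-robin within each class, so every
    fold keeps roughly the class proportions of the full sample."""
    folds, seen = [0] * len(labels), {}
    for i, l in enumerate(labels):
        folds[i] = seen.get(l, 0) % k
        seen[l] = seen.get(l, 0) + 1
    return folds

labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
print(auroc(labels, scores))
print(stratified_folds(labels, 2))
```

Stratification matters in direct-marketing data because responders are rare; an unstratified fold can easily contain too few positives for a stable AUROC estimate.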
Findings
The results suggest that these methods vary in performance across different criteria and validation methods. Overall, neural networks outperform the others in AUROC value and cumulative lifts, and the stratified ten‐fold cross‐validation produces more accurate results than bootstrap validation.
Practical implications
To select predictive models to support direct marketing decisions, researchers need to adopt appropriate performance criteria and validation procedures.
Originality/value
The study addresses the key issues in model selection, i.e. performance criteria and validation methods, and conducts systematic analyses to generate the findings and practical implications.
Gilad Chen, John E Mathieu and Paul D Bliese
Abstract
Organizational researchers have become increasingly interested in multi-level constructs – that is, constructs that are meaningful at multiple levels of analysis. However, despite the plethora of theoretical and empirical work on multi-level topics, explicit frameworks for validation of multi-level constructs have yet to be fully developed. Moreover, available principles for conducting construct validation assume that the construct resides at a single level of analysis. We propose a five-step framework for conceptualizing and testing multi-level constructs by integrating principles of construct validation with recent advancements in multi-level theory, research, and methodology. The utility of the framework is illustrated using theoretical and empirical examples.
Eugene Yujun Fu, Hong Va Leong, Grace Ngai and Stephen C.F. Chan
Abstract
Purpose
Social signal processing under affective computing aims at recognizing and extracting useful human social interaction patterns. Fight is a common social interaction in real life. A fight detection system finds wide applications. This paper aims to detect fights in a natural and low-cost manner.
Design/methodology/approach
Research works on fight detection are often based on visual features, demanding substantial computation and good video quality. In this paper, the authors propose an approach to detect fight events through motion analysis. Most existing works evaluated their algorithms on public data sets manifesting simulated fights, where the fights are acted out by actors. To evaluate real fights, the authors collected videos involving real fights to form a data set. Based on the two types of data sets, the authors evaluated the performance of their motion signal analysis algorithm, which was then compared with the state-of-the-art approach based on MoSIFT descriptors with a Bag-of-Words mechanism, and with basic motion signal analysis with Bag-of-Words.
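The abstract does not give the authors' motion-analysis algorithm. As a loose illustration of the general idea only (reducing a video to a one-dimensional motion signal by frame differencing, then flagging sustained bursts), on synthetic frames:

```python
import random

def motion_signal(frames):
    """Per-step motion magnitude: mean absolute pixel difference between
    consecutive frames (each frame is a flat list of intensities)."""
    return [sum(abs(a - b) for a, b in zip(f0, f1)) / len(f0)
            for f0, f1 in zip(frames, frames[1:])]

def detect_events(signal, threshold, min_len=3):
    """Return (start, end) index pairs where motion stays above threshold
    for at least min_len steps."""
    events, start = [], None
    for i, m in enumerate(signal + [0.0]):  # sentinel closes a trailing event
        if m > threshold and start is None:
            start = i
        elif m <= threshold and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    return events

# Synthetic clip: calm frames, then a burst of large frame-to-frame change
random.seed(1)
calm = [[random.randint(100, 105) for _ in range(64)] for _ in range(10)]
burst = [[random.randint(0, 255) for _ in range(64)] for _ in range(6)]
frames = calm + burst + calm
sig = motion_signal(frames)
print(detect_events(sig, threshold=20.0))
```

The low computational cost of such signal-level analysis, compared with descriptor extraction such as MoSIFT, is the trade-off the paper evaluates.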
Findings
The experimental results indicate that the proposed approach accurately detects fights in real scenarios and performs better than the MoSIFT approach.
Originality/value
By collecting and annotating real surveillance videos containing real fight events and augmenting with well-known data sets, the authors proposed, implemented and evaluated a low computation approach, comparing it with the state-of-the-art approach. The authors uncovered some fundamental differences between real and simulated fights and initiated a new study in discriminating real against simulated fight events, with very good performance.
Shi‐Woei Lin and Chih‐Hsing Cheng
Abstract
Purpose
The purpose of this paper is to compare various linear opinion pooling models for aggregating probability judgments and to determine whether Cooke's performance weighting model can sift out better calibrated experts and produce a better aggregated distribution.
Design/methodology/approach
The leave‐one‐out cross‐validation technique is adopted to perform an out‐of‐sample comparison of Cooke's classical model, the equal weight linear pooling method, and the best expert approach.
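An equal-weight linear opinion pool and the leave-one-out comparison against a best-expert baseline can be sketched as follows. The forecasts are hypothetical, and Cooke's performance weighting itself (which requires calibration scores on seed questions) is omitted:

```python
import statistics

def brier(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return statistics.fmean((p - o) ** 2 for p, o in zip(probs, outcomes))

def equal_weight_pool(forecasts):
    """Equal-weight linear opinion pool: average the experts' probabilities."""
    return [statistics.fmean(col) for col in zip(*forecasts)]

def loo_scores(forecasts, outcomes):
    """Leave-one-out: on each held-out event, score the pool against the
    single expert that looked best on the remaining events."""
    pool = equal_weight_pool(forecasts)
    pool_sc, best_sc = [], []
    for i, o in enumerate(outcomes):
        rest = [j for j in range(len(outcomes)) if j != i]
        best = min(forecasts, key=lambda f: brier([f[j] for j in rest],
                                                  [outcomes[j] for j in rest]))
        pool_sc.append((pool[i] - o) ** 2)
        best_sc.append((best[i] - o) ** 2)
    return statistics.fmean(pool_sc), statistics.fmean(best_sc)

# Three hypothetical experts' probabilities for six binary events
forecasts = [[0.9, 0.8, 0.3, 0.7, 0.2, 0.6],
             [0.6, 0.9, 0.1, 0.8, 0.4, 0.9],
             [0.7, 0.4, 0.6, 0.9, 0.1, 0.3]]
outcomes = [1, 1, 0, 1, 0, 1]
print(loo_scores(forecasts, outcomes))
```

Holding each event out before choosing the "best" expert is exactly what guards against the in-sample optimism that the Findings below attribute to earlier evaluations of Cooke's model.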
Findings
Both aggregation models significantly outperform the best expert approach, indicating the need for inputs from multiple experts. The performance score for Cooke's classical model drops considerably in out‐of‐sample analysis, indicating that Cooke's performance weight approach might have been slightly overrated before, and the performance weight aggregation method no longer dominantly outperforms the equal weight linear opinion pool.
Research limitations/implications
The results show that using seed questions to sift out better calibrated experts may still be a feasible approach. However, because the superiority of Cooke's model as discussed in previous studies can no longer be claimed, whether the cost of extra efforts used in generating and evaluating seed questions is justifiable remains a question.
Originality/value
Understanding the performance of various models for aggregating experts' probability judgments is critical for decision and risk analysis. Furthermore, the leave‐one‐out cross‐validation technique used in this study achieves more objective evaluations than previous studies.