Search results

Article
Publication date: 30 September 2019

Slawomir Koziel and Anna Pietrenko-Dabrowska

Abstract

Purpose

A technique for accelerated design optimization of antenna input characteristics is developed and comprehensively validated using real-world wideband antenna structures. A comparative study against a conventional trust-region algorithm is provided. Investigations of the effects of the algorithm control parameters are also carried out.

Design/methodology/approach

An optimization methodology is introduced that replaces finite differentiation (FD) with a combination of FD and a selectively applied Broyden updating formula for estimating antenna response Jacobians. The updating formula is used for directions that are sufficiently well aligned with the design relocation that occurred in the most recent algorithm iteration. This allows for a significant reduction in the number of full-wave electromagnetic simulations needed for the algorithm to converge and hence reduces the overall design cost.
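
As a rough illustration of the idea (not the authors' exact algorithm), the sketch below mixes a Broyden rank-one update with finite differences: coordinate directions sufficiently aligned with the latest design step reuse the updated Jacobian column, while the remaining columns are refreshed by FD. The alignment rule, threshold and function names are assumptions made for this example.

```python
import numpy as np

def estimate_jacobian(f, x, f_x, J_prev, dx_prev, df_prev, fd_step=1e-6, align_thr=0.1):
    """Hybrid FD/Broyden Jacobian estimate (illustrative sketch).

    Columns whose coordinate directions are well aligned with the most recent
    design relocation dx_prev are taken from a Broyden rank-one update of
    J_prev; the remaining columns are refreshed by finite differences.
    """
    n = x.size
    # Broyden rank-one update of the previous Jacobian
    J_broyden = J_prev + np.outer(df_prev - J_prev @ dx_prev, dx_prev) / (dx_prev @ dx_prev)

    J = np.empty_like(J_prev)
    for k in range(n):
        # alignment of the k-th coordinate direction with the last design step
        alignment = abs(dx_prev[k]) / np.linalg.norm(dx_prev)
        if alignment >= align_thr:
            J[:, k] = J_broyden[:, k]                # reuse updated column, no extra simulation
        else:
            x_pert = x.copy()
            x_pert[k] += fd_step
            J[:, k] = (f(x_pert) - f_x) / fd_step    # one full-wave simulation per FD column
    return J
```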

Findings

Incorporation of the updating formulas into the Jacobian estimation process in a selective manner considerably reduces the computational cost of the optimization process without compromising the design quality. The algorithm proposed in the study can be used to speed up direct optimization of the antenna structures as well as surrogate-assisted procedures involving variable-fidelity models.

Research limitations/implications

This study sets a direction for further studies on accelerating procedures for the local optimization of antenna structures. Further investigations on the effects of the control parameters on the algorithm performance are necessary along with the development of means to automate the algorithm setup for a particular antenna structure, especially from the point of view of the search space dimensionality.

Originality/value

The proposed algorithm proved useful for a reduced-cost optimization of antennas and has been demonstrated to outperform conventional algorithms. To the authors’ knowledge, this is one of the first attempts to address the problem in this manner. In particular, it goes beyond traditional approaches, especially by combining various sensitivity estimation update measures in an adaptive fashion.

Article
Publication date: 26 August 2014

JaeHoon Lim, SangJoon Shin, Vaitla Laxman, Junemo Kim and JinSeok Jang

Abstract

Purpose

The purpose of the present paper is to develop the capability to design modern rotorcraft with enhanced accuracy and reliability.

Design/methodology/approach

Among the existing rotorcraft design programs, an appropriate program was selected as a baseline for improvement. It was based on a database of conventional rotorcraft fleets. The baseline program was not robust because it contained only a simple iteration loop that monitored the gross weight of the aircraft, and it was therefore not accurate enough to provide the quality and sophistication required of a conceptual design framework for present and future generations of rotorcraft. In this paper, the estimation formulas for rotorcraft subsystem sizing and weight were updated with reference to modern aircraft data. In addition, trend curves for currently available turboshaft engines were established. Instead of a power estimation algorithm based on momentum theory with empirical corrections, blade element rotor aerodynamics and trim analysis were developed and incorporated into the present framework. Moreover, the simple iteration loop for the aircraft gross weight was reinforced by adding a mathematical optimization algorithm, namely a genetic algorithm.
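
As a toy illustration of that last step (not the framework itself), the sketch below recasts the gross-weight closure as a one-variable optimization problem and solves it with SciPy's differential evolution standing in for the genetic algorithm; the weight-fraction regression and all numbers are invented for the example.

```python
import numpy as np
from scipy.optimize import differential_evolution

def empty_weight_fraction(gross_weight):
    """Hypothetical regression of empty-weight fraction vs. gross weight,
    standing in for the updated subsystem sizing/weight formulas."""
    return 0.55 * (gross_weight / 5000.0) ** -0.05

def sizing_residual(x, payload=800.0, fuel_fraction=0.18):
    """Mismatch between the assumed gross weight and the gross weight
    implied by the weight build-up (illustrative equations only)."""
    gw = x[0]
    computed_gw = payload + fuel_fraction * gw + empty_weight_fraction(gw) * gw
    return abs(computed_gw - gw)

# Differential evolution used here as a stand-in for the genetic algorithm
result = differential_evolution(sizing_residual, bounds=[(1000.0, 20000.0)], seed=1)
print(f"converged gross weight: {result.x[0]:.1f} kg, residual: {result.fun:.3f} kg")
```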

Findings

The improved optimization framework for rotorcraft conceptual design, capable of designing modern rotorcraft with enhanced accuracy and reliability, was constructed using the MATLAB Optimization Toolbox.

Practical implications

The optimization framework can be used by the rotorcraft industry at an early stage of rotorcraft design.

Originality/value

It was verified that the improved optimization framework for rotorcraft conceptual design is capable of designing modern rotorcraft with enhanced accuracy and reliability.

Details

Aircraft Engineering and Aerospace Technology: An International Journal, vol. 86 no. 5
Type: Research Article
ISSN: 0002-2667

Article
Publication date: 1 May 1999

Bozidar Sarler and Jure Mencinger

Abstract

The axisymmetric steady‐state convective‐diffusive thermal field problem associated with direct‐chill, semi‐continuously cast billets has been solved using the dual reciprocity boundary element method. The solution is based on a formulation which incorporates the one‐phase physical model, Laplace equation fundamental solution weighting, and scaled augmented thin plate splines for transforming the domain integrals into a finite series of boundary integrals. Realistic non‐linear boundary conditions and temperature variation of all material properties are included. The solution is verified by comparison with the results of the classical finite volume method. Results for a 0.500 m diameter Al 4.5 per cent Cu alloy billet at typical casting conditions are given.
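
In generic dual reciprocity form (textbook notation and one common sign convention, not necessarily the paper's exact formulation), the nonhomogeneous term is expanded in radial basis functions, here augmented thin plate splines, each admitting a particular solution of the Laplace operator, so that the domain integral collapses to boundary integrals:

```latex
% Generic dual reciprocity step: b is the nonhomogeneous (here convective) term,
% u* the Laplace fundamental solution, q* = \partial u*/\partial n,
% \hat{q}_j = \partial \hat{u}_j/\partial n, Gamma the boundary and c_i the free term.
\[
  b(\mathbf{r}) \approx \sum_{j=1}^{N+L} \alpha_j f_j(\mathbf{r}),
  \qquad \nabla^2 \hat{u}_j = f_j,
\]
\[
  \int_\Omega b\, u^* \,\mathrm{d}\Omega
  \approx \sum_{j=1}^{N+L} \alpha_j
  \Big( c_i\, \hat{u}_j(\mathbf{r}_i)
       + \int_\Gamma \hat{u}_j\, q^* \,\mathrm{d}\Gamma
       - \int_\Gamma \hat{q}_j\, u^* \,\mathrm{d}\Gamma \Big).
\]
```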

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 9 no. 3
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 24 September 2021

Xue Deng and Yingxian Lin

Abstract

Purpose

The weighted evaluation function method with normalized objective functions is used to transform the proposed multi-objective model into a single objective one, which reflects the investors' preference for returns, risks and social responsibility by adjusting the weights. Finally, an example is given to illustrate the solution steps of the model and the effectiveness of the algorithm.
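
A minimal sketch of the weighted evaluation step under assumed normalization ranges and weights (all numbers and the min-max normalization rule are illustrative; the paper's exact evaluation function may differ):

```python
def weighted_evaluation(objectives, weights):
    """Combine normalized objectives into a single score.

    objectives maps a name to (value, lo, hi, sense): lo/hi bound the
    normalization range and sense is 'max' or 'min'. Higher scores are better.
    """
    score = 0.0
    for name, (value, lo, hi, sense) in objectives.items():
        normalized = (value - lo) / (hi - lo)
        if sense == 'min':                   # e.g. variance: smaller is better
            normalized = 1.0 - normalized
        score += weights[name] * normalized
    return score

# Illustrative investor preference: weights trade off return, risk (variance),
# diversification (Yager entropy) and social responsibility.
objectives = {
    'return':   (0.12, 0.02, 0.20, 'max'),
    'variance': (0.05, 0.01, 0.15, 'min'),
    'entropy':  (1.20, 0.50, 2.00, 'max'),
    'social':   (0.70, 0.20, 0.90, 'max'),
}
weights = {'return': 0.4, 'variance': 0.3, 'entropy': 0.1, 'social': 0.2}
print(f"portfolio score: {weighted_evaluation(objectives, weights):.3f}")
```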

Design/methodology/approach

Based on possibility theory and assuming that the future returns of each asset are trapezoidal fuzzy numbers, a mean-variance-Yager entropy-social responsibility model is constructed that includes piecewise linear transaction costs and risk-free assets. The model proposed in this paper includes six constraints: the investment proportion sum, non-negativity, ceiling and floor, pre-assignment, cardinality and round lot constraints. In addition, considering the special round lot constraint, the proposed model is transformed into an integer programming problem.

Findings

The effects of different constraints and transaction costs on the efficient frontier of the portfolio are analyzed, which not only assists investors in making decisions close to their expectations by setting appropriate parameters but also provides constructive suggestions based on the overall performance of each asset.

Originality/value

The improved particle swarm optimization algorithm contains two improvements: first, the complex constraints are satisfied by using a renewable 0–1 random constraint matrix and random scaling factors instead of fixed ones; second, particles with poor fitness are eliminated and new particles that satisfy all the constraints are randomly added, strengthening the global search as much as possible.
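
A minimal sketch of the second improvement, replacing the worst particles with freshly sampled feasible ones (the constraint handling via the 0–1 random constraint matrix is abstracted behind a hypothetical `sample_feasible` callback):

```python
import numpy as np

def refresh_worst_particles(swarm, fitness, sample_feasible, frac=0.1, rng=None):
    """Eliminate poor-fitness particles and insert new feasible ones (sketch).

    swarm: (P, n) array of particle positions; fitness: (P,) objective values
    (lower is better here); sample_feasible(rng) is assumed to return a random
    position that already satisfies all the portfolio constraints.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_replace = max(1, int(frac * len(swarm)))
    worst = np.argsort(fitness)[-n_replace:]      # highest objective values = worst
    for i in worst:
        swarm[i] = sample_feasible(rng)
    return swarm
```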

Article
Publication date: 6 February 2024

Han Wang, Quan Zhang, Zhenquan Fan, Gongcheng Wang, Pengchao Ding and Weidong Wang

Abstract

Purpose

To solve the obstacle detection problem in robot autonomous obstacle negotiation, this paper aims to propose an obstacle detection system based on elevation maps for three types of obstacles: positive obstacles, negative obstacles and trench obstacles.

Design/methodology/approach

The system framework includes mapping, ground segmentation, obstacle clustering and obstacle recognition. Positive obstacle detection is realized by calculating minimum-rectangle bounding boxes, which involves convex hull calculation, minimum-area rectangle calculation and bounding box generation. The detection of negative obstacles and trench obstacles is implemented on the basis of missing information in the map, using an obstacle discovery method and a type confirmation method.
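
The bounding-box step can be pictured as follows: a sketch using OpenCV's convex hull and minimum-area rectangle routines on a synthetic cluster; the paper's own implementation and data structures are not reproduced here.

```python
import numpy as np
import cv2

def positive_obstacle_box(cluster_points_xy):
    """Minimum-rectangle bounding box for one clustered positive obstacle (sketch).

    cluster_points_xy: (N, 2) array of map-plane coordinates of an obstacle
    cluster taken from the elevation map.
    """
    pts = np.asarray(cluster_points_xy, dtype=np.float32)
    hull = cv2.convexHull(pts)        # convex hull of the cluster
    rect = cv2.minAreaRect(hull)      # ((cx, cy), (w, h), angle)
    corners = cv2.boxPoints(rect)     # four corner points of the box
    return rect, corners

# Example with a synthetic cluster
rng = np.random.default_rng(0)
cluster = rng.normal(loc=[2.0, 1.0], scale=[0.3, 0.1], size=(50, 2))
rect, corners = positive_obstacle_box(cluster)
print("center, size, angle:", rect)
```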

Findings

The obstacle detection system has been thoroughly tested in various environments. In the outdoor experiment, the system successfully detected obstacles with a 95% success rate at an average processing time of 22.2 ms, indicating the effectiveness of the detection algorithm. Moreover, the system’s error range for obstacle detection falls between 4% and 6.6%, meeting the requirements for obstacle negotiation in the next stage.

Originality/value

This paper studies how to solve the obstacle detection problem that arises during robot obstacle negotiation.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 9 April 2024

Lu Wang, Jiahao Zheng, Jianrong Yao and Yuangao Chen

Abstract

Purpose

With the rapid growth of the domestic lending industry, assessing whether the borrower of each loan is at risk of default is a pressing issue for financial institutions. Although some existing models handle such problems well, they still fall short in certain respects. The purpose of this paper is to improve the accuracy of credit assessment models.

Design/methodology/approach

In this paper, three stages are used to improve the classification performance of LSTM so that financial institutions can more accurately identify borrowers at risk of default. In the first stage, the K-Means-SMOTE algorithm is used to mitigate the class imbalance. In the second stage, ResNet is used for feature extraction and a two-layer LSTM is then used for learning, strengthening the network’s ability to mine and utilize deep information. Finally, model performance is further improved by using the IDWPSO algorithm to tune the neural network.
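
A minimal sketch of the first two stages on toy data (the ResNet feature extractor and the IDWPSO hyperparameter search are omitted; the layer sizes, the `cluster_balance_threshold` setting and the toy data are assumptions made so the example runs):

```python
import numpy as np
from imblearn.over_sampling import KMeansSMOTE
from tensorflow import keras

# Toy imbalanced data standing in for the credit dataset
rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = (rng.random(1000) < 0.1).astype(int)

# Stage 1: rebalance the minority class with K-Means-SMOTE
# (a low cluster_balance_threshold keeps the toy example running)
X_res, y_res = KMeansSMOTE(cluster_balance_threshold=0.01,
                           random_state=0).fit_resample(X, y)

# Stage 2: a two-layer LSTM classifier; each feature vector is treated as a
# length-20 sequence in place of the ResNet-extracted features
X_seq = X_res.reshape(len(X_res), 20, 1)
model = keras.Sequential([
    keras.layers.LSTM(64, return_sequences=True, input_shape=(20, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_seq, y_res, epochs=5, batch_size=64, verbose=0)
```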

Findings

On two imbalanced datasets (class ratios of 700:1 and 3:1, respectively), the multi-stage improved model was compared with ten other models using accuracy, precision, specificity, recall, G-measure, F-measure and the nonparametric Wilcoxon test. The multi-stage improved model was demonstrated to have a more significant advantage in evaluating the imbalanced credit datasets.

Originality/value

In this paper, the parameters of the ResNet-LSTM hybrid neural network, which can fully mine and utilize the deep information, are tuned by an innovative intelligent optimization algorithm to strengthen the classification performance of the model.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 28 September 2021

Nageswara Rao Eluri, Gangadhara Rao Kancharla, Suresh Dara and Venkatesulu Dondeti

Abstract

Purpose

Gene selection is considered a fundamental process in the bioinformatics field. Existing methodologies for cancer classification are mostly based on clinical data, and their diagnostic capability is limited. Nowadays, the significant problems of cancer diagnosis are solved by the utilization of gene expression data. Researchers have introduced many approaches to diagnose cancer appropriately and effectively. This paper aims to develop a cancer data classification model using gene expression data.

Design/methodology/approach

The proposed classification model involves three main phases: “(1) Feature extraction, (2) Optimal Feature Selection and (3) Classification”. Initially, five benchmark gene expression datasets are collected. From the collected gene expression data, feature extraction is performed. To diminish the length of the feature vectors, optimal feature selection is performed, for which a new meta-heuristic algorithm termed the quantum-inspired immune clone optimization (QICO) algorithm is used. Once the relevant features are selected, classification is performed by a deep learning model, a recurrent neural network (RNN). Finally, the experimental analysis reveals that the proposed QICO-based feature selection model outperforms the other heuristic-based feature selection methods, and the optimized RNN outperforms the other machine learning methods.
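
For intuition only, the sketch below implements a plain immune-clone wrapper over a binary feature mask; the quantum-inspired encoding and the RNN classifier of the actual QICO-RNN pipeline are not reproduced, and the classifier, mutation rate and population sizes are arbitrary choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def clonal_feature_selection(X, y, pop=12, clones=3, generations=10, seed=0):
    """Immune-clone style wrapper feature selection (illustrative sketch).

    Each antibody is a binary mask over features; fitness is the cross-validated
    accuracy of a classifier restricted to the selected features.
    """
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]

    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    population = rng.integers(0, 2, size=(pop, n_feat))
    for _ in range(generations):
        scores = np.array([fitness(m) for m in population])
        elite = population[np.argsort(scores)[-pop // 2:]]        # keep the best half
        mutants = np.repeat(elite, clones, axis=0)                # clone the elite ...
        flip = rng.random(mutants.shape) < 0.05                   # ... and hypermutate
        mutants = np.where(flip, 1 - mutants, mutants)
        candidates = np.vstack([population, mutants])
        cand_scores = np.array([fitness(m) for m in candidates])
        population = candidates[np.argsort(cand_scores)[-pop:]]   # next generation
    best = population[np.argmax([fitness(m) for m in population])]
    return best.astype(bool)

# usage: selected = clonal_feature_selection(X_train, y_train)
```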

Findings

The proposed QICO-RNN achieves the best outcomes at every learning percentage. At a learning percentage of 85, the accuracy of the proposed QICO-RNN was 3.2% higher than RNN, 4.3% higher than RF, 3.8% higher than NB and 2.1% higher than KNN for Dataset 1. For Dataset 2, at a learning percentage of 35, the accuracy of the proposed QICO-RNN was 13.3% higher than RNN, 8.9% higher than RF and 14.8% higher than both NB and KNN. Hence, the developed QICO algorithm performs well in accurately classifying the cancer data using gene expression data.

Originality/value

This paper introduces a new optimal feature selection model using QICO together with a QICO-based RNN for effective classification of cancer data using gene expression data. To the authors’ knowledge, this is the first work to use such a combination for this purpose.

Case study
Publication date: 23 June 2021

Patama Sangwongwanich and Winai Wongsurawat

Abstract

Learning Outcomes

Teaching objectives are as follows: students need to understand the critical choices involved in introducing a product into a new market, including but not limited to the macroeconomic context, the target consumer segment, the positioning of the product, distribution channels, pricing and promotion strategy. Students must learn to appreciate the importance of anticipating the reaction of incumbents, and how such reactions may determine the success or failure of a new product entry into the market. Students develop skills to analyze complementarities between different distribution channels and understand how investments in developing one channel can result in positive or negative consequences in other channels.

Case Overview/Synopsis

How can health products such as multivitamins and other nutritional supplements make headway into emerging markets that are moving up the ranks of middle-income economies? This case study investigates the case of Thailand, a country that in the early 1990s registered a per capita income comparable to that of Vietnam, Laos and Cambodia today. It illustrates, through the real experience of Pat – an executive of a local subsidiary of an American multinational pharmaceutical company – how a new entrant exploited the rapidly changing economic and retailing environment to become a successful player in an important and growing segment of consumer products.

Complexity Academic Level

This case is suitable for master’s degree students or short-course executives.

Supplementary materials

Teaching Notes are available for educators only.

Subject code

CSS 11: Strategy.

Details

Emerald Emerging Markets Case Studies, vol. 11 no. 2
Type: Case Study
ISSN: 2045-0621

Article
Publication date: 19 June 2019

Xin Liu, Hang Zhang, Pengbo Zhu, Xianqiang Yang and Zhiwei Du

Abstract

Purpose

This paper aims to investigate an identification strategy for the nonlinear state-space model (SSM) in the presence of an unknown output time-delay. The equations to estimate the unknown model parameters and output time-delay are derived simultaneously in the proposed strategy.

Design/methodology/approach

The unknown integer-valued time-delay is treated as a latent variable that is uniformly distributed over an a priori known range. The estimates of the unknown time-delay and model parameters are both obtained using the Expectation-Maximization (EM) algorithm, which performs well in dealing with latent-variable problems. Moreover, the particle filter (PF) with an unknown time-delay is introduced to calculate the Q-function of the EM algorithm.
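
Schematically, treating the integer delay d as a uniform latent variable gives an E-step of the following generic form (textbook notation, not the paper's exact derivation):

```latex
% EM Q-function with the integer output delay d as a uniformly distributed
% latent variable; the inner expectation over the state trajectory x_{0:T}
% is approximated with the particle filter, and the M-step maximizes Q over theta.
\[
  Q\!\left(\theta,\theta^{(k)}\right)
  = \sum_{d=d_{\min}}^{d_{\max}}
    p\!\left(d \mid y_{1:T},\theta^{(k)}\right)\,
    \mathbb{E}_{x_{0:T}\mid y_{1:T},\,d,\,\theta^{(k)}}
    \!\left[\log p\!\left(y_{1:T}, x_{0:T}, d \mid \theta\right)\right],
  \qquad
  p(d) = \frac{1}{d_{\max}-d_{\min}+1}.
\]
```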

Findings

Although a number of effective approaches for nonlinear SSM identification have been developed in the literature, most of them do not consider time-delays. Time-delays commonly exist in industrial scenarios and can cause extra difficulties for industrial process modeling. The problem of an unknown output time-delay is considered in this paper, and the validity of the proposed approach is demonstrated through a numerical example and a two-link manipulator system.

Originality/value

A novel approach for identifying the nonlinear SSM in the presence of an unknown output time-delay using the EM algorithm is put forward in this work.

Book part
Publication date: 30 December 2004

Stephen M. Stohs and Jeffrey T. LaFrance

Abstract

A common feature of certain kinds of data is a high level of statistical dependence across space and time. This spatial and temporal dependence contains useful information that can be exploited to significantly reduce the uncertainty surrounding local distributions. This chapter develops a methodology for inferring local distributions that incorporates these dependencies. The approach accommodates active learning over space and time, and from aggregate data and distributions to disaggregate individual data and distributions. We combine data sets on Kansas winter wheat yields – annual county-level yields over the period from 1947 through 2000 for all 105 counties in the state of Kansas, and 20,720 individual farm-level sample moments, based on ten years of the reported actual production histories for the winter wheat yields of farmers participating in the United States Department of Agriculture Federal Crop Insurance Corporation Multiple Peril Crop Insurance Program in each of the years 1991–2000. We derive a learning rule that combines statewide, county, and local farm-level data using Bayes’ rule to estimate the moments of individual farm-level crop yield distributions. Information theory and the maximum entropy criterion are used to estimate farm-level crop yield densities from these moments. These posterior densities are found to substantially reduce the bias and volatility of crop insurance premium rates.
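
The maximum entropy step can be written generically as follows (illustrative notation; the chapter's own choice of moment functions may differ):

```latex
% Maximum entropy density implied by a finite set of estimated moments
% \hat{\mu}_j of the farm-level yield y; lambda_0 absorbs the normalization.
\[
  \hat{f}(y) = \arg\max_{f}
  \left\{ -\int f(y)\,\ln f(y)\,\mathrm{d}y \right\}
  \quad\text{s.t.}\quad
  \int y^{j} f(y)\,\mathrm{d}y = \hat{\mu}_{j},\; j=0,\dots,J,
\]
\[
  \Longrightarrow\qquad
  \hat{f}(y) = \exp\!\Big(\sum_{j=0}^{J} \lambda_{j}\, y^{j}\Big),
\]
with the multipliers $\lambda_j$ chosen so that the moment constraints hold.
```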

Details

Spatial and Spatiotemporal Econometrics
Type: Book
ISBN: 978-0-76231-148-4
