Search results

1 – 10 of 32
Open Access
Article
Publication date: 3 August 2020

Abdellatif Moudafi


Abstract

The focus of this paper is on Q-Lasso, introduced in Alghamdi et al. (2013), which extended the Lasso of Tibshirani (1996). The closed convex subset Q of a Euclidean m-space, for m ∈ ℕ, is the set of errors when linear measurements are taken to recover a signal/image via the Lasso. Based on a recent work by Wang (2013), we are interested in two new penalty methods for Q-Lasso relying on two types of difference-of-convex-functions (DC for short) programming, where the DC objective functions are the difference of the l1 and lσq norms and the difference of the l1 and lr norms with r > 1. By means of a generalized q-term shrinkage operator exploiting the special structure of the lσq norm, we design a proximal gradient algorithm for handling the DC l1 − lσq model. Then, based on the majorization scheme, we develop a majorized penalty algorithm for the DC l1 − lr model. The convergence results of our new algorithms are presented as well. We would like to emphasize that extensive simulation results in the case Q = {b} show that these two new algorithms offer improved signal recovery performance and require reduced computational effort relative to state-of-the-art l1 and lp (p ∈ (0,1)) models, see Wang (2013). We also devise two DC algorithms in the spirit of a paper in which an exact DC representation of the cardinality constraint is investigated and which also used the largest-q norm lσq, presenting numerical results that show the efficiency of our DC algorithm in comparison with methods using other penalty terms in the context of quadratic programming, see Jun-ya et al. (2017).
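The q-term shrinkage idea can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the helper names, step-size rule and iteration count are assumptions. The q largest-magnitude entries pass through untouched while the rest are soft-thresholded, inside a plain proximal-gradient loop for the smooth least-squares term.

```python
import numpy as np

def q_term_shrinkage(x, lam, q):
    """Sketch of a generalized q-term shrinkage operator: soft-threshold
    every entry, then restore the q largest-magnitude entries unchanged,
    mimicking a prox step for lam * (||.||_1 - ||.||_{sigma_q})."""
    idx = np.argsort(-np.abs(x))                          # decreasing magnitude
    out = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # soft threshold all
    out[idx[:q]] = x[idx[:q]]                             # keep top-q untouched
    return out

def prox_grad_dc_lasso(A, b, lam=0.1, q=3, step=None, iters=200):
    """Proximal-gradient loop for min 0.5||Ax - b||^2 + lam(||x||_1 - ||x||_{sigma_q});
    illustrative only -- the step size comes from the spectral norm of A."""
    m, n = A.shape
    x = np.zeros(n)
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L with L = ||A||_2^2
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # gradient of smooth part
        x = q_term_shrinkage(x - step * grad, step * lam, q)
    return x
```

For q = 0 the operator reduces to ordinary soft thresholding, i.e. the classical Lasso prox, which is one way to see the l1 − lσq model as a sparsity-aware relaxation of it.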

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964


Open Access
Article
Publication date: 28 August 2021

Slawomir Koziel and Anna Pietrenko-Dabrowska


Abstract

Purpose

A novel framework for expedited antenna optimization with an iterative prediction-correction scheme is proposed. The methodology is comprehensively validated using three real-world antenna structures: narrow-band, dual-band and wideband, optimized under various design scenarios.

Design/methodology/approach

The keystone of the proposed approach is to reuse designs pre-optimized for various sets of performance specifications and to encode them into metamodels that render good initial designs, as well as an initial estimate of the antenna response sensitivities. Subsequent design refinement is realized using an iterative prediction-correction loop accommodating the discrepancies between the actual and target design specifications.

Findings

The presented framework is capable of yielding optimized antenna designs at the cost of just a few full-wave electromagnetic simulations. The practical importance of the iterative correction procedure has been corroborated by benchmarking against gradient-only refinement. It has been found that the incorporation of problem-specific knowledge into the optimization framework greatly facilitates parameter adjustment and improves its reliability.

Research limitations/implications

The proposed approach can be a viable tool for antenna optimization whenever a certain number of previously obtained designs are available, or the designer finds the initial effort of gathering them justifiable by the intended re-use of the procedure. Future work will incorporate response-feature technology to improve the accuracy of the initial approximation of antenna response sensitivities.

Originality/value

The proposed optimization framework has been proved to be a viable tool for cost-efficient and reliable antenna optimization. To the authors' knowledge, this approach to antenna optimization goes beyond the capabilities of available methods, especially in terms of efficient utilization of existing knowledge, thus enabling reliable parameter tuning over broad ranges of both operating conditions and material parameters of the structure of interest.

Details

Engineering Computations, vol. 38 no. 10
Type: Research Article
ISSN: 0264-4401


Open Access
Article
Publication date: 24 October 2022

Babak Lotfi and Bengt Ake Sunden


Abstract

Purpose

This study aims to use computational numerical simulations to clarify and explore the influences of periodic cellular lattice (PCL) morphological parameters – such as lattice structure topology (simple cubic, body-centered cubic, z-reinforced body-centered cubic [BCCZ], face-centered cubic and z-reinforced face-centered cubic [FCCZ] lattice structures) and porosity value – on the thermal-hydraulic characteristics of the novel trussed fin-and-elliptical tube heat exchanger (FETHX), which has led to a deeper understanding of the superior heat transfer enhancement ability of the PCL structure.

Design/methodology/approach

A three-dimensional computational fluid dynamics (CFD) model is proposed in this paper to provide a better understanding of the fluid flow and heat transfer behavior of the PCL structures in trussed FETHXs associated with different structure topologies and high porosities. The flow governing equations of the trussed FETHX are solved with the CFD software ANSYS CFX®, using the Menter SST turbulence model to accurately predict flow characteristics in the fluid flow region.

Findings

The thermal-hydraulic performance benchmark analyses – such as field synergy performance and performance evaluation criteria – conducted during this research demonstrate that decreasing the porosity of all PCL structures to 92% provides the best thermal-hydraulic performance. Overall, according to the obtained outcomes, the trussed FETHX with the BCCZ lattice structure at 92% porosity presents the best thermal-hydraulic performance enhancement among all the investigated PCL structures.

Originality/value

To the best of the authors’ knowledge, this paper is one of the first in the literature that provides thorough thermal-hydraulic characteristics of a novel trussed FETHX with high-porosity PCL structures.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 33 no. 3
Type: Research Article
ISSN: 0961-5539


Open Access
Article
Publication date: 19 August 2020

Ahmed Berkane and Abdallah Bradji


Abstract

We consider, as discretization in space, the nonconforming mesh developed in SUSHI (Scheme Using Stabilization and Hybrid Interfaces), developed in Eymard et al. (2010), for a semi-linear heat equation. The time discretization is performed using a uniform mesh. We are concerned with a nonlinear scheme that has been studied in Bradji (2016) in the context of the general framework GDM (Gradient Discretization Method) (Droniou et al., 2018), which includes SUSHI. We provide sufficient conditions on the size of the spatial mesh and the time step which allow us to prove a W^{1,∞}(L^2)-error estimate. This error estimate can be viewed as an improvement over the W^{1,2}(L^2)-error estimate proved in Bradji (2016). The W^{1,∞}(L^2)-error estimate we prove in this note was stated without proof in Bradji (2016, Remark 7.2, Page 1302). Its proof is based on a comparison with an appropriately chosen auxiliary finite volume scheme along with the derivation of some new estimates on its solution.

Details

Arab Journal of Mathematical Sciences, vol. 27 no. 1
Type: Research Article
ISSN: 1319-5166


Open Access
Article
Publication date: 19 November 2021

Łukasz Knypiński


Abstract

Purpose

The purpose of this paper is to carry out an efficiency analysis of selected metaheuristic algorithms (MAs) based on the investigation of analytical functions and of optimization processes for a permanent magnet motor.

Design/methodology/approach

A comparative performance analysis was conducted for selected MAs. Optimization calculations were performed for the following algorithms: genetic algorithm (GA), particle swarm optimization algorithm (PSO), bat algorithm, cuckoo search algorithm (CS) and only-best-individual algorithm (OBI). All of the optimization algorithms were developed as computer scripts. Next, all optimization procedures were applied to search for the optimal design of the line-start permanent magnet synchronous motor by the use of a multi-objective function.
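As an illustration of one of the compared MAs, here is a minimal global-best PSO in the spirit of the scripts described above. This is a generic textbook variant with assumed hyperparameters (w, c1, c2, swarm size), not the author's implementation:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimization sketch.
    f: objective to minimize; bounds: list of (lo, hi) per dimension."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))       # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()                                      # personal bests
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()                       # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                        # keep inside bounds
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# Example: minimize the sphere function on [-5, 5]^2
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), [(-5, 5), (-5, 5)])
```

Swapping the velocity update for a bat- or cuckoo-style move yields the other population-based MAs compared in the paper; the surrounding bookkeeping stays the same.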

Findings

The research results show that the best statistical efficiency (mean objective function and standard deviation [SD]) is obtained for the PSO and CS algorithms, while the best results over several runs are obtained for PSO and GA. The type of optimization algorithm should be selected taking into account the duration of a single optimization process. In the case of time-consuming processes, algorithms with low SD should be used.

Originality/value

The newly proposed simple nondeterministic algorithm can also be applied to simple optimization calculations. On the basis of the presented simulation results, it is possible to determine the quality of the compared MAs.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 41 no. 5
Type: Research Article
ISSN: 0332-1649


Open Access
Article
Publication date: 8 April 2020

Isabel María Parra Oller, Salvador Cruz Rambaud and María del Carmen Valls Martínez


Abstract

Purpose

The main purpose of this paper is to determine the discount function which best fits individuals' preferences through an empirical analysis of the different functions used in the field of intertemporal choice.

Design/methodology/approach

After an in-depth review of the existing literature, and unlike most studies, which only focus on exponential and hyperbolic discounting, this manuscript compares how well the data fit six different discount functions. To do this, the analysis is based on the usual statistical methods and on non-linear least squares regression, using the Gauss-Newton algorithm, to estimate the models' parameters; finally, the AICc method is used to compare the significance of the six proposed models.
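The estimation-and-comparison pipeline (Gauss-Newton least squares followed by AICc model selection) can be sketched as follows. The two discount functions, the synthetic choice data and all parameter values below are illustrative assumptions, not the paper's data or code:

```python
import numpy as np

def gauss_newton(model, t, y, p0, iters=60, eps=1e-7):
    """Plain Gauss-Newton least-squares fit with a forward-difference
    Jacobian -- a sketch of the estimation step only."""
    p = np.atleast_1d(np.asarray(p0, dtype=float))
    for _ in range(iters):
        base = model(t, p)
        r = base - y                                  # residual vector
        J = np.empty((t.size, p.size))
        for j in range(p.size):                       # numerical Jacobian
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (model(t, p + dp) - base) / eps
        p = p + np.linalg.lstsq(J, -r, rcond=None)[0]  # Gauss-Newton step
    return p

def aicc(rss, n, k):
    # corrected Akaike information criterion for least-squares models
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

exponential = lambda t, p: np.exp(-p[0] * t)          # exponential discounting
hyperbolic = lambda t, p: 1.0 / (1.0 + p[0] * t)      # hyperbolic discounting

# Synthetic preference data generated from a hyperbolic curve plus noise
rng = np.random.default_rng(1)
t = np.linspace(0.5, 60, 40)
y = hyperbolic(t, [0.15]) + rng.normal(0, 0.01, t.size)

scores = {}
for name, model in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    p = gauss_newton(model, t, y, [0.1])
    rss = float(np.sum((model(t, p) - y) ** 2))
    scores[name] = aicc(rss, t.size, 1)

best = min(scores, key=scores.get)                    # lower AICc = better model
```

On this synthetic data the hyperbolic model wins, as expected; the paper runs the same kind of comparison across six candidate functions on real choice data.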

Findings

This paper shows that the so-called q-exponential function deformed by the amount is the model which best explains the individuals' preferences on both delayed gains and losses. To the extent of the authors' knowledge, this is the first time that a function different from the general hyperbola provides a better fit to individuals' preferences.

Originality/value

This paper contributes to the search of an alternative model able to explain the individual behavior in a more realistic way.

Details

European Journal of Management and Business Economics, vol. 30 no. 1
Type: Research Article
ISSN: 2444-8451


Open Access
Article
Publication date: 16 March 2020

Slawomir Koziel and Adrian Bekasiewicz


Abstract

Purpose

The purpose of this paper is to investigate the exploitation of a database of pre-existing designs to accelerate the parametric optimization of antenna structures.

Design/methodology/approach

The usefulness of pre-existing designs for the rapid design of antennas is investigated. The proposed approach exploits a database of existing antenna base designs to determine a good starting point for structure optimization and its response sensitivities. The considered method is suitable for handling computationally expensive models, which are evaluated using full-wave electromagnetic (EM) simulations. Numerical case studies are provided demonstrating the feasibility of the framework for the design of real-world structures.

Findings

The use of pre-existing designs enables rapid identification of a good starting point for antenna optimization and speeds up estimation of the structure response sensitivities. The base designs can be arranged into subsets (simplexes) in the objective space and used to represent the target vector, i.e. the starting point for structure design. The base point closest to the initial design can be used to initialize the Jacobian for local optimization. Moreover, local optimization costs can be reduced through the use of the Broyden formula for Jacobian updates in consecutive iterations.
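The Jacobian-update idea can be illustrated with the classical ("good") Broyden rank-one formula. The toy function and starting Jacobian below are made-up examples, not taken from the paper:

```python
import numpy as np

def broyden_update(J, dx, df):
    """Good Broyden rank-one Jacobian update: J' = J + (df - J dx) dx^T / (dx^T dx).
    This is the cheap refresh used between iterations in place of
    re-estimating the Jacobian from new full-wave EM simulations."""
    dx = dx.reshape(-1, 1)
    df = df.reshape(-1, 1)
    return J + (df - J @ dx) @ dx.T / float(dx.T @ dx)

# Toy response f(x) = (x0^2, x0*x1), with the exact Jacobian at the start point
f = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
x0 = np.array([1.0, 2.0])
J0 = np.array([[2.0, 0.0],      # df1/dx0, df1/dx1 at x0
               [2.0, 1.0]])     # df2/dx0, df2/dx1 at x0
x1 = x0 + np.array([0.1, -0.05])
J1 = broyden_update(J0, x1 - x0, f(x1) - f(x0))
```

By construction the updated Jacobian satisfies the secant condition J1·Δx = Δf, which is exactly what makes it a cheap substitute for finite-difference re-estimation with expensive EM simulations.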

Research limitations/implications

The study investigates the possibility of reusing pre-existing designs for the acceleration of antenna optimization. The proposed technique enables the identification of a good starting point and reduces the number of expensive EM simulations required to obtain the final design.

Originality/value

The proposed design framework proved to be useful for the identification of a good initial design and for rapid optimization of modern antennas. Identification of the starting point for the design of such structures is extremely challenging when using conventional methods involving parametric studies or repetitive local optimizations. The presented methodology proved to be a useful design and geometry-scaling tool when previously obtained designs are available for the same antenna structure.

Details

Engineering Computations, vol. 37 no. 7
Type: Research Article
ISSN: 0264-4401


Open Access
Article
Publication date: 5 June 2023

Elias Shohei Kamimura, Anderson Rogério Faia Pinto and Marcelo Seido Nagano


Abstract

Purpose

This paper aims to present a literature review of the most recent optimisation methods applied to Credit Scoring Models (CSMs).

Design/methodology/approach

The research methodology employed technical procedures based on bibliographic and exploratory analyses. A traditional investigation was carried out using the Scopus, ScienceDirect and Web of Science databases. The selection and classification of papers took place in three steps, considering only studies in the English language published in electronic journals (from 2008 to 2022). The investigation led to the selection of 46 publications (10 presenting literature reviews and 36 proposing CSMs).

Findings

The findings showed that CSMs are usually formulated using Financial Analysis, Machine Learning, Statistical Techniques, Operational Research and Data Mining Algorithms. The main databases used by the researchers were banks and the University of California, Irvine. The analyses identified 48 methods used by CSMs, the main ones being: Logistic Regression (13%), Naive Bayes (10%) and Artificial Neural Networks (7%). The authors conclude that advances in credit score studies will require new hybrid approaches capable of integrating Big Data and Deep Learning algorithms into CSMs. These algorithms should consider practical issues to improve the level of adaptation and performance demanded of CSMs.

Practical implications

The results of this study might provide considerable practical implications for the application of CSMs. As the aim was to demonstrate the application of optimisation methods, it is important that legal and ethical issues be better accommodated by CSMs. Improvements are also suggested for studies focused on micro and small companies, on sales in instalment plans and on commercial credit, through improved or new CSMs.

Originality/value

The economic reality surrounding credit granting has made risk management a complex decision-making issue increasingly supported by CSMs. Therefore, this paper fills an important gap in the literature by presenting an analysis of recent advances in optimisation methods applied to CSMs. The main contribution of this paper consists of presenting the evolution of the state of the art and future trends in studies aimed at proposing better CSMs.

Details

Journal of Economics, Finance and Administrative Science, vol. 28 no. 56
Type: Research Article
ISSN: 2077-1886


Open Access
Article
Publication date: 9 June 2021

Jin Gi Kim, Hyun-Tak Lee and Bong-Gyu Jang


Abstract

Purpose

This paper examines whether the successful bid rate of the OnBid public auction, published by Korea Asset Management Corporation, can identify and forecast the Korean business-cycle expansion and contraction regimes characterized by the OECD reference turning points. We use logistic regression and support vector machines in performing the OECD regime classification and in predicting the three-month-ahead regime. We find that the OnBid auction rate conveys important information for detecting the coincident and future regimes, because this information might be closely related to deleveraging regarding default on debt obligations. This finding suggests that corporate managers and investors could use the auction information to gauge the regime position in their decision-making. This research has academic significance in that it reveals the relationship between the auction market and the business-cycle regimes.
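A minimal stand-in for the classification step might look as follows. The hand-rolled logistic regression and the synthetic auction-rate data are illustrative assumptions only; the paper presumably uses standard statistical tooling on real OnBid and OECD data:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Tiny gradient-descent logistic regression -- a sketch of fitting
    P(expansion regime | auction success rate)."""
    Xb = np.c_[np.ones(len(X)), X]            # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))     # predicted P(expansion)
        w -= lr * Xb.T @ (p - y) / len(y)     # gradient of the log-loss
    return w

def predict(w, X):
    Xb = np.c_[np.ones(len(X)), X]
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)

# Synthetic stand-in: higher auction success rate in expansion months
rng = np.random.default_rng(0)
rate = np.r_[rng.normal(0.7, 0.05, 50), rng.normal(0.4, 0.05, 50)]
regime = np.r_[np.ones(50), np.zeros(50)]     # 1 = expansion, 0 = contraction
w = fit_logistic(rate[:, None], regime)
acc = float(np.mean(predict(w, rate[:, None]) == regime))
```

Shifting the label vector by three months against the feature matrix turns the same setup into the three-month-ahead regime prediction the paper evaluates.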

Details

Journal of Derivatives and Quantitative Studies: 선물연구, vol. 29 no. 2
Type: Research Article
ISSN: 1229-988X


Open Access
Article
Publication date: 19 January 2024

Fuzhao Chen, Zhilei Chen, Qian Chen, Tianyang Gao, Mingyan Dai, Xiang Zhang and Lin Sun


Abstract

Purpose

The electromechanical brake system is leading the latest development trend in railway braking technology. The tolerance stack-up generated during assembly and production catalyzes slight geometric dimensioning and tolerancing deviations between the motor stator and rotor inside the electromechanical cylinder. This tolerance leads to imprecise brake control, so it is necessary to diagnose motor faults in the fully assembled electromechanical brake system. This paper aims to present an improved variational mode decomposition (VMD) algorithm, which endeavors to elucidate and push the boundaries of mechanical synchronicity problems within the realm of the electromechanical brake system.

Design/methodology/approach

The VMD algorithm plays a pivotal role in the preliminary phase, employing mode decomposition techniques to decompose the motor speed signals. Afterward, the error energy algorithm is utilized to extract abnormal features, leveraging the practical intrinsic mode functions, eliminating extraneous noise and enhancing the signal's fidelity. This refined signal then becomes the basis for fault analysis. In the analytical step, the cepstrum is employed to calculate the formant and envelope of the reconstructed signal. By scrutinizing the formant and envelope, the fault point within the electromechanical brake system is precisely identified, contributing to a sophisticated and accurate fault diagnosis.

Findings

This paper innovatively uses the VMD algorithm for the modal decomposition of electromechanical brake (EMB) motor speed signals and combines it with the error energy algorithm to achieve abnormal feature extraction. The signal is reconstructed from the effective intrinsic mode function (IMF) components after noise removal, and the formant and envelope are calculated by cepstrum to locate the fault point. Experiments show that the decomposition algorithm can effectively decompose the original speed signal. After feature extraction, signal enhancement and fault identification, the motor's mechanical fault point can be accurately located. This fault diagnosis method is an effective algorithm suitable for EMB systems.
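The cepstral step can be sketched as follows: the real cepstrum turns a periodic echo in a broadband signal into a sharp peak at the corresponding quefrency, which is the mechanism used to locate the fault. The toy speed signal and the 100-sample fault lag below are invented for illustration:

```python
import numpy as np

def real_cepstrum(signal):
    """Real cepstrum: inverse FFT of the log magnitude spectrum -- the
    envelope/formant tool used to localize periodic anomalies."""
    spectrum = np.abs(np.fft.fft(signal))
    return np.real(np.fft.ifft(np.log(spectrum + 1e-12)))  # epsilon guards log(0)

# Toy broadband speed fluctuation plus a periodic fault echo at 100 samples
rng = np.random.default_rng(0)
clean = rng.standard_normal(1000)
faulty = clean + 0.5 * np.roll(clean, 100)        # circular delay = exact FFT echo
ceps = real_cepstrum(faulty)
lag = int(np.argmax(np.abs(ceps[20:500])) + 20)   # search away from quefrency 0
```

The echo multiplies the spectrum by |1 + 0.5·e^{-iω·100}|, whose log is periodic in frequency with period 2π/100, so the cepstrum peaks at quefrency 100 and directly reveals the fault lag.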

Originality/value

By using this improved VMD algorithm, the electromechanical brake system can precisely identify the rotational anomaly of the motor. This method can offer an online diagnosis analysis function during operation and contribute to an automated factory inspection strategy while parts are assembled. Compared with the conventional motor diagnosis method, this improved VMD algorithm can eliminate the need for additional acceleration sensors and save hardware costs. Moreover, the accumulation of online detection functions helps improve the reliability of train electromechanical braking systems.
