Search results

1 – 10 of 906
Article
Publication date: 2 November 2018

Seyed Reza Aali, Mohammad Reza Besmi and Mohammad Hosein Kazemi


Abstract

Purpose

The purpose of this paper is to study variation regularization with a positive sequence extraction-normalized least mean square (VRP-NLMS) algorithm for frequency estimation in a three-phase electrical distribution system. A simulation test is provided to validate the performance and convergence rate of the proposed estimation algorithm.

Design/methodology/approach

Least mean square (LMS) algorithms for frequency estimation encounter problems when the voltage contains unbalance, sags and harmonic distortion. The convergence rate of the LMS algorithm is sensitive to the adjustment of the step-size parameter used in the update equation. This paper proposes a VRP-NLMS algorithm for frequency estimation in a power system. The regularization parameter of the NLMS algorithm is made variable to adjust the step-size parameter. A delayed signal cancellation (DSC) operator suppresses harmonics and the negative-sequence component of the voltage vector in the two-phase αβ plane. The DSC part is placed in front of the NLMS algorithm as a pre-filter, and the positive sequence of the grid voltage is extracted.
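The predictor structure described above lends itself to a compact sketch. The following is a minimal two-tap NLMS frequency estimator for a clean sinusoid, assuming a fixed regularization term `eps` where the paper makes it variable, and omitting the DSC pre-filter; all parameter values are illustrative.

```python
import numpy as np

def nlms_frequency_estimate(x, fs, mu=0.5, eps=1e-3):
    """Two-tap NLMS predictor: for a sinusoid, x[n] = 2*cos(w)*x[n-1] - x[n-2],
    so the first weight converges to 2*cos(w). `eps` is the regularization
    term that the paper makes variable (fixed here for simplicity)."""
    w = np.zeros(2)
    for n in range(2, len(x)):
        u = np.array([x[n - 1], x[n - 2]])
        e = x[n] - w @ u                    # prediction error
        w += mu * e * u / (eps + u @ u)     # normalized LMS update
    omega = np.arccos(np.clip(w[0] / 2.0, -1.0, 1.0))
    return omega * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50.0 * t)            # clean 50 Hz grid-like signal
print(nlms_frequency_estimate(x, fs))
```

The normalization `u @ u` in the denominator is what distinguishes NLMS from plain LMS and makes the step size insensitive to the signal amplitude.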

Findings

By adapting the step-size parameter, the speed and accuracy of the LMS algorithm are improved. The DSC operator is added to the NLMS algorithm to further improve the performance of this adaptive filter. Simulation results validate that the proposed VRP-NLMS algorithm achieves lower misalignment together with a faster convergence rate.

Originality/value

This paper provides theoretical support for the simulated system performance.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 38 no. 1
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 7 August 2019

Shao Hung Goh



Abstract

Purpose

Warehouses are large emitters of greenhouse gases and their impact on climate change is under increasing focus. The purpose of this paper is to investigate the barriers that inhibit the adoption of low-carbon warehousing in Asia-Pacific and their links to carbon abatement performance.

Design/methodology/approach

An exploratory conceptual model was first developed from a literature review of the general barriers to sustainable supply chain practices and hence potentially in low-carbon warehousing. A large contract logistics services provider in the Asia-Pacific served as the subject of a case study. The perceived barriers to low-carbon warehousing were derived from an internal survey of respondents from the case company and regressed against carbon abatement outcomes at that organization’s operations across the region.

Findings

Results show that the case company reduced carbon emissions by 36 percent on a revenue-normalized basis between 2008 and 2014, but with relatively lower success in emerging markets vs mature markets. An Elastic Net regression analysis confirms that technology and government-related factors are the most important barriers in the case company’s efforts to “decarbonize” its local warehousing operations. However, results suggest that the customer-related barrier, which is highly correlated with the government barrier, is in part driven by the latter.
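The regression step described above can be illustrated with a dependency-light Elastic Net fitted by coordinate descent on synthetic data; the barrier scores, coefficients and penalty settings below are hypothetical stand-ins for the paper's survey data.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator for the l1 part of the penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net(X, y, lam=0.1, alpha=0.5, n_sweeps=200):
    """Coordinate descent for
    (1/2n)||y - Xw||^2 + lam*(alpha*||w||_1 + (1 - alpha)/2*||w||^2)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]            # partial residual
            z = X[:, j] @ r / n
            w[j] = soft_threshold(z, lam * alpha) / (
                X[:, j] @ X[:, j] / n + lam * (1 - alpha))
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                    # 5 hypothetical barrier scores
true_w = np.array([1.5, 0.0, -2.0, 0.0, 0.0])   # only two barriers matter
y = X @ true_w + 0.1 * rng.normal(size=60)      # carbon-abatement outcome
w = elastic_net(X, y)
print(np.round(w, 2))
```

The l1 part of the penalty zeroes out weak barriers while the l2 part stabilizes the fit when barriers are correlated, which is why Elastic Net suits this setting.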

Research limitations/implications

This case study is based on a single multinational company in Asia-Pacific, but nonetheless serves as an impetus for more cross-sectional studies to form an industry-wide view.

Originality/value

An extended stewardship framework based on the natural resource-based view has been proposed, in which logistics services providers take on a proactive boundary-spanning role to lower the external barriers to low-carbon warehousing.

Details

International Journal of Physical Distribution & Logistics Management, vol. 49 no. 6
Type: Research Article
ISSN: 0960-0035


Article
Publication date: 11 January 2023

Ajit Kumar and A.K. Ghosh


Abstract

Purpose

The purpose of this study is to estimate aerodynamic parameters using regularized regression-based methods.

Design/methodology/approach

Regularized regression methods used are LASSO, ridge and elastic net.
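As one illustration of the regularized estimators named above, a ridge (Tikhonov) estimate of a linearized lift model can be written in closed form; the model structure, coefficient values and noise level below are invented for the sketch, not taken from the flight test data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical linearized lift model: CL = CL0 + CL_alpha*alpha + CL_de*delta_e
alpha = rng.uniform(-0.1, 0.2, 100)       # angle of attack [rad]
delta_e = rng.uniform(-0.05, 0.05, 100)   # elevator deflection [rad]
X = np.column_stack([np.ones(100), alpha, delta_e])
CL = 0.3 + 4.5 * alpha + 0.8 * delta_e + 0.01 * rng.normal(size=100)

lam = 1e-3                                 # ridge (Tikhonov) penalty
theta = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ CL)
print(np.round(theta, 2))
```

LASSO and elastic net replace the closed-form solve with an l1-penalized fit, which additionally drives negligible derivatives to exactly zero.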

Findings

A viable option of aerodynamic parameter estimation from regularized regression-based methods is found.

Practical implications

Efficacy of the methods is examined on flight test data.

Originality/value

This study provides regularized regression-based methods for aerodynamic parameter estimation from the flight test data.

Details

Aircraft Engineering and Aerospace Technology, vol. 95 no. 5
Type: Research Article
ISSN: 1748-8842


Book part
Publication date: 18 January 2022

James Mitchell, Aubrey Poon and Gian Luigi Mazzi


Abstract

This chapter uses an application to explore the utility of Bayesian quantile regression (BQR) methods in producing density nowcasts. Our quantile regression modeling strategy is designed to reflect important nowcasting features, namely the use of mixed-frequency data, the ragged-edge, and large numbers of indicators (big data). An unrestricted mixed data sampling strategy within a BQR is used to accommodate a large mixed-frequency data set when nowcasting; the authors consider various shrinkage priors to avoid parameter proliferation. In an application to euro area GDP growth, using over 100 mixed-frequency indicators, the authors find that the quantile regression approach produces accurate density nowcasts including over recessionary periods when global-local shrinkage priors are used.
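The core of quantile regression is the pinball (check) loss. The sketch below fits linear quantile regressions by subgradient descent on synthetic data; it is a frequentist stand-in for the chapter's Bayesian treatment (no priors, no mixed-frequency structure), and all numbers are illustrative.

```python
import numpy as np

def quantile_regression(X, y, tau, lr=0.05, n_iter=2000):
    """Linear quantile regression via subgradient descent on the pinball
    loss rho_tau(u) = u * (tau - 1{u < 0})."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        u = y - X @ w
        grad = -X.T @ (tau - (u < 0)) / len(y)   # subgradient of pinball loss
        w -= lr * grad
    return w

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = 1.0 + 2.0 * x + rng.normal(size=500)          # standard normal errors
X = np.column_stack([np.ones(500), x])
w50 = quantile_regression(X, y, 0.5)              # median "nowcast"
w90 = quantile_regression(X, y, 0.9)              # upper tail
print(np.round(w50, 1), np.round(w90, 1))
```

Fitting a grid of quantiles in this way yields a density forecast; the Bayesian version additionally places shrinkage priors on the coefficients, which matters once the indicator set grows large.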

Details

Essays in Honor of M. Hashem Pesaran: Prediction and Macro Modeling
Type: Book
ISBN: 978-1-80262-062-7


Article
Publication date: 4 November 2014

Ahmad Mozaffari, Nasser Lashgarian Azad and Alireza Fathi


Abstract

Purpose

The purpose of this paper is to demonstrate the applicability of swarm and evolutionary techniques to regularized machine learning. Generally, by defining a proper penalty function, regularization laws are embedded into the structure of common least-squares solutions to increase the numerical stability, sparsity, accuracy and robustness of the regression weights. Several regularization techniques have been proposed so far, each with its own advantages and disadvantages. Several efforts have been made to find fast and accurate deterministic solvers to handle those regularization techniques. However, the proposed numerical and deterministic approaches require some knowledge of mathematical programming and do not guarantee the global optimality of the obtained solution. In this research, the authors propose the use of constraint swarm and evolutionary techniques to cope with the demanding requirements of the regularized extreme learning machine (ELM).

Design/methodology/approach

To implement the required tools for the comparative numerical study, three steps are taken. The considered algorithms include both classical and swarm and evolutionary approaches. For the classical regularization techniques, Lasso regularization, Tikhonov regularization, cascade Lasso-Tikhonov regularization and elastic net are considered. For swarm and evolutionary-based regularization, an efficient constraint-handling technique known as the self-adaptive penalty function is considered, and its algorithmic structure is modified so that it can efficiently perform regularized learning. Several well-known metaheuristics are considered to check the generalization capability of the proposed scheme. To test the efficacy of the proposed constraint evolutionary-based regularization technique, a wide range of regression problems is used. In addition, the proposed framework is applied to a real-life identification problem, i.e. identifying the dominant factors affecting the hydrocarbon emissions of an automotive engine, to provide further assurance of the performance of the proposed scheme.
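The regularized ELM at the center of this comparison can be sketched as follows, with the Tikhonov readout computed by the kind of closed-form deterministic solver the paper proposes to replace with constraint-handling metaheuristics; the network size, penalty and toy target are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def elm_fit(X, y, n_hidden=50, lam=1e-2):
    """ELM: a random, fixed hidden layer; only the readout is trained.
    Here the regularized readout is the closed-form Tikhonov solution."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (fixed)
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

x = np.linspace(-3, 3, 200)[:, None]
y = np.sin(x[:, 0])                              # toy regression target
model = elm_fit(x, y)
mse = np.mean((elm_predict(model, x) - y) ** 2)
print(mse)
```

A metaheuristic variant would search over `beta` directly, scoring candidates by the penalized training error, which avoids the matrix solve and accommodates non-differentiable penalties such as the l1 norm.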

Findings

Through an extensive numerical study, it is observed that the proposed scheme can be easily used for regularized machine learning. It is indicated that, by defining a proper objective function and considering an appropriate penalty function, near-global-optimum values of the regressors can be easily obtained. The results attest to the high potential of swarm and evolutionary techniques for fast, accurate and robust regularized machine learning.

Originality/value

The originality of the research lies in the use of a novel constraint metaheuristic computing scheme which can be used for an effective regularized optimally pruned extreme learning machine (OP-ELM). The self-adaptation of the proposed method relieves the user of the need for detailed knowledge of the underlying system and increases the degree of automation of OP-ELM. Moreover, by using different types of metaheuristics, it is demonstrated that the proposed methodology is a general, flexible scheme that can be combined with different types of swarm and evolutionary-based optimization techniques to form a regularized machine learning approach.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 7 no. 4
Type: Research Article
ISSN: 1756-378X


Book part
Publication date: 13 May 2017

David Card, David S. Lee, Zhuan Pei and Andrea Weber


Abstract

A regression kink design (RKD or RK design) can be used to identify causal effects in settings where the regressor of interest is a kinked function of an assignment variable. In this chapter, we apply an RKD approach to study the effect of unemployment benefits on the duration of joblessness in Austria, and discuss implementation issues that may arise in similar settings, including the use of bandwidth selection algorithms and bias-correction procedures. Although recent developments in nonparametric estimation (Calonico, Cattaneo, & Farrell, 2014; Imbens & Kalyanaraman, 2012) are sometimes interpreted by practitioners as pointing to a default estimation procedure, we show that in any given application different procedures may perform better or worse. In particular, Monte Carlo simulations based on data-generating processes that closely resemble the data from our application show that some asymptotically dominant procedures may actually perform worse than “sub-optimal” alternatives in a given empirical application.
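A minimal RKD estimate can be sketched as the change in local-linear slopes at the kink point; the data-generating process and the fixed bandwidth below are invented for illustration (the chapter's point is precisely that bandwidth and bias-correction choices matter).

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.uniform(-1, 1, 5000)           # assignment variable
a0 = 0.0                               # kink point in the policy rule
slope_change = 0.7                     # true kink in dy/da at a0
y = 1.0 + 0.5 * a + slope_change * np.maximum(a - a0, 0) \
    + 0.05 * rng.normal(size=5000)

h = 0.3                                # fixed bandwidth (not data-driven)

def local_slope(mask):
    """Slope of a local linear fit of y on a within the masked window."""
    A = np.column_stack([np.ones(mask.sum()), a[mask]])
    return np.linalg.lstsq(A, y[mask], rcond=None)[0][1]

right = local_slope((a >= a0) & (a < a0 + h))
left = local_slope((a < a0) & (a > a0 - h))
print(right - left)                    # RKD estimate of the slope change
```

Scaling this slope change by the known kink in the benefit schedule yields the causal effect; varying `h` on simulated data like this is exactly how the chapter's Monte Carlo comparison of bandwidth selectors proceeds.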

Details

Regression Discontinuity Designs
Type: Book
ISBN: 978-1-78714-390-6

Article
Publication date: 4 November 2014

ShiYang Zhao and Pu Xue


Abstract

Purpose

The purpose of the paper is to improve the calculability of a continuum damage failure model of composite laminates based on Tsai-Wu criteria.

Design/methodology/approach

Techniques based on viscous regularization, a characteristic element length and the fracture energies of fiber and matrix are used in the model.

Findings

The calculability of the material model is improved. The modified model predicts the behavior of composite structures better.

Originality/value

The convergence problem and the mesh-softening problem are the main concerns in the calculability of a numerical model. To improve convergence, a technique based on viscous regularization of the damage variable is used. Meanwhile, a characteristic element length and the fracture energies of fiber and matrix are added to the damage constitutive equation to reduce the mesh sensitivity of the numerical results. Finally, a laminated structure with damage is implemented using a User Material Subroutine in ABAQUS/Standard. Mesh sensitivity and the value of the viscosity are discussed.
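A common form of viscous regularization is the Duvaut-Lions type, in which a viscous damage variable relaxes toward the inviscid one; whether the paper uses exactly this form is not stated, so the update below is a generic sketch with illustrative time step and viscosity.

```python
def viscous_update(d_v, d, dt, eta):
    """Backward-Euler step of d_v' = (d - d_v) / eta (Duvaut-Lions type):
    the viscous damage d_v relaxes toward the inviscid damage d."""
    return (eta * d_v + dt * d) / (eta + dt)

dt, eta = 1e-3, 5e-3                   # illustrative time step and viscosity
d_v, history = 0.0, []
for step in range(100):
    d = 0.0 if step < 20 else 1.0      # sudden damage onset at step 20
    d_v = viscous_update(d_v, d, dt, eta)
    history.append(d_v)

print(history[20], history[-1])        # the jump is smoothed; d_v -> d
```

Because the regularized damage evolves gradually over a few time increments instead of jumping, the tangent stiffness stays better conditioned, which is what restores Newton convergence in the implicit solver.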

Details

Multidiscipline Modeling in Materials and Structures, vol. 10 no. 4
Type: Research Article
ISSN: 1573-6105


Open Access
Article
Publication date: 24 October 2021

Piergiorgio Alotto, Paolo Di Barba, Alessandro Formisano, Gabriele Maria Lozito, Raffaele Martone, Maria Evelina Mognaschi, Maurizio Repetto, Alessandro Salvini and Antonio Savini


Abstract

Purpose

Inverse problems in electromagnetism, namely, the recovery of sources (currents or charges) or system data from measured effects, are usually ill-posed or, in the numerical formulation, ill-conditioned, and require suitable regularization to provide meaningful results. To test new regularization methods, benchmark problems are needed whose numerical properties and solutions are well known. Hence, this study aims to define a benchmark problem suitable for testing new regularization approaches and to solve it with different methods.

Design/methodology/approach

To assess reliability and performance of different solving strategies for inverse source problems, a benchmark problem of current synthesis is defined and solved by means of several regularization methods in a comparative way; subsequently, an approach in terms of an artificial neural network (ANN) is considered as a viable alternative to classical regularization schemes. The solution of the underlying forward problem is based on a finite element analysis.
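The role of regularization in such an ill-conditioned problem can be sketched with a synthetic matrix standing in for the lead field matrix: a naive solve amplifies measurement noise, while a Tikhonov solution does not. All sizes, singular values and noise levels below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic ill-conditioned system standing in for the lead field matrix.
U, _ = np.linalg.qr(rng.normal(size=(40, 40)))
V, _ = np.linalg.qr(rng.normal(size=(40, 40)))
s = np.logspace(0, -8, 40)                   # decaying singular values, cond ~ 1e8
A = (U * s) @ V.T
x_true = V[:, 0] + 0.5 * V[:, 1]             # source made of well-resolved modes
b = A @ x_true + 1e-6 * rng.normal(size=40)  # noisy "measurements"

x_naive = np.linalg.solve(A, b)              # noise amplified by up to 1/s_min
lam = 1e-5
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(40), A.T @ b)  # Tikhonov

print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_tik - x_true))
```

The Tikhonov filter damps singular components with s^2 below `lam`, trading a small bias on well-resolved modes for a large reduction in noise amplification; an ANN approach instead learns this source-to-measurement inversion from examples.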

Findings

The paper provides a very detailed analysis of the proposed inverse problem in terms of numerical properties of the lead field matrix. The solutions found by different regularization approaches and an ANN method are provided, showing the performance of the applied methods and the numerical issues of the benchmark problem.

Originality/value

The value of the paper is to provide the numerical characteristics and issues of the proposed benchmark problem in a comprehensive way, by means of a wide variety of regularization methods and an ANN approach.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 40 no. 6
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 19 August 2021

Hendrik Kohrs, Benjamin Rainer Auer and Frank Schuhmacher


Abstract

Purpose

In short-term forecasting of day-ahead electricity prices, incorporating intraday dependencies is vital for accurate predictions. However, it quickly leads to dimensionality problems, i.e. ill-defined models with too many parameters, which require an adequate remedy. This study addresses this issue.

Design/methodology/approach

In an application for the German/Austrian market, this study derives variable importance scores from a random forest algorithm, feeds the identified variables into a support vector machine and compares the resulting forecasting technique to other approaches (such as dynamic factor models, penalized regressions or Bayesian shrinkage) that are commonly used to resolve dimensionality problems.
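The two-stage screen-then-refit design described above can be sketched with permutation importance; a plain linear model stands in for both the random forest and the SVM to keep the example dependency-light, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 400, 8
X = rng.normal(size=(n, p))                 # p candidate lagged predictors
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.5 * rng.normal(size=n)

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

w = fit(X, y)
base = np.mean((X @ w - y) ** 2)

importance = []
for j in range(p):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])    # break predictor j only
    importance.append(np.mean((Xp @ w - y) ** 2) - base)  # loss increase

top = sorted(int(j) for j in np.argsort(importance)[-2:])
w_refit = fit(X[:, top], y)                 # second-stage refit on the winners
print(top)
```

The importance profile (here the per-predictor loss increase) is exactly the kind of object the study aggregates into "which hours of which past days" matter; the refit on the selected variables is the analogue of feeding them into the SVM.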

Findings

This study develops full importance profiles stating which hours of which past days have the highest predictive power for specific hours in the future. Using the profile information in the forecasting setup leads to very promising results compared to the alternatives. Furthermore, the importance profiles provide a possible explanation why some forecasting methods are more accurate for certain hours of the day than others. They also help to explain why simple forecast combination schemes tend to outperform the full battery of models considered in the comprehensive comparative study.

Originality/value

With the information contained in the variable importance scores and the results of the extensive model comparison, this study essentially provides guidelines for variable and model selection in future electricity market research.

Article
Publication date: 8 May 2023

Ben Shepherd and Tanaporn Sriklay


Abstract

Purpose

The authors extend the World Bank's Logistics Performance Index (LPI) for 30 additional countries and 13 additional years. The authors develop an inexpensive method for extending survey data when frequent, universal surveys are unavailable. The authors identify groups of country characteristics that influence LPI scores.

Design/methodology/approach

Using data from the World Development Indicators—the broadest global dataset of country socioeconomic features—the authors test machine learning algorithms for their ability to predict the LPI. The authors examine importance scores to identify factors that influence LPI scores.
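The evaluation design (predict the index from many indicators, then compare held-out accuracy against an income-only baseline) can be sketched on synthetic data; the indicator set, coefficients and split below are invented, not the World Development Indicators.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
income = rng.normal(size=n)                     # per-capita income proxy
other = rng.normal(size=(n, 6))                 # six other indicator groups
lpi = 0.6 * income + other @ np.array([0.5, -0.4, 0.3, 0.0, 0.2, -0.3]) \
      + 0.3 * rng.normal(size=n)                # synthetic index

def r2_holdout(X, y, n_train=200):
    """OLS on the first n_train rows, R^2 on the held-out remainder."""
    w = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)[0]
    resid = y[n_train:] - X[n_train:] @ w
    return 1.0 - resid.var() / y[n_train:].var()

full = r2_holdout(np.column_stack([income, other]), lpi)
baseline = r2_holdout(income[:, None], lpi)
print(round(full, 2), round(baseline, 2))
```

Held-out R^2 is the honest yardstick here: it measures how well the fitted model extends the index to country-years with no survey, which is the paper's stated goal.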

Findings

The best performing algorithm produces predictions on unseen data that account for nearly 90% of observed variation, and are accurate to within 6%. It performs twice as well as an OLS model with per capita income as the only predictor. Explanatory factors are business environment, economic structure, finance, environment, human development, and institutional quality.

Practical implications

Machine learning offers a simple, inexpensive way of extending the coverage of survey data. This dataset provides a richer picture of logistics performance around the world. The factors the authors identify as predicting higher LPI scores can help policymakers and practitioners target interventions.

Originality/value

This paper is one of the first applications of machine learning to extend coverage of an index based on an international survey. The authors use the new data to provide the most wide-ranging analysis of logistics performance across countries and over time. The output is an important resource for policymakers tracking performance, and researchers particularly in smaller and lower income countries. The authors also examine a wider range of explanatory factors for LPI scores than previous work.

Details

International Journal of Physical Distribution & Logistics Management, vol. 53 no. 9
Type: Research Article
ISSN: 0960-0035

