Search results

1 – 10 of 729
Book part
Publication date: 23 November 2011

Tiemen Woutersen

Abstract

Observations in a dataset are rarely missing at random. One can control for this non-random selection of the data by introducing fixed effects or other nuisance parameters. This chapter deals with consistent estimation in the presence of many nuisance parameters. It derives a new orthogonality concept that gives sufficient conditions for consistent estimation of the parameters of interest. It also shows how this orthogonality concept can be used to derive and compare estimators. The chapter then shows how to use the orthogonality concept to derive estimators for unbalanced panels and incomplete data sets (missing data).
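
As a hedged illustration of estimation in the presence of many nuisance parameters (not the chapter's own estimator), the sketch below shows the simplest linear-model case, where demeaning within each individual makes the slope estimate insensitive to the fixed effects even in an unbalanced panel; all data are synthetic.

```python
# Minimal sketch (not from the chapter): in the linear model, the within
# transformation removes individual fixed effects -- one simple case in
# which the slope is "orthogonal" to the nuisance parameters. The panel
# is deliberately unbalanced.
import numpy as np

rng = np.random.default_rng(0)
beta = 2.0
rows = []
for i in range(500):                      # 500 individuals
    T_i = rng.integers(2, 6)              # unbalanced: 2-5 observations each
    alpha_i = rng.normal(scale=3.0)       # fixed effect, correlated with x below
    x = alpha_i + rng.normal(size=T_i)    # regressor depends on the fixed effect
    y = alpha_i + beta * x + rng.normal(size=T_i)
    rows.append((x, y))

# Within estimator: demean x and y inside each individual, then pool.
xd = np.concatenate([x - x.mean() for x, _ in rows])
yd = np.concatenate([y - y.mean() for _, y in rows])
beta_within = (xd @ yd) / (xd @ xd)

# Pooled OLS that ignores the fixed effects is badly biased here.
xp = np.concatenate([x for x, _ in rows])
yp = np.concatenate([y for _, y in rows])
xp_c, yp_c = xp - xp.mean(), yp - yp.mean()
beta_pooled = (xp_c @ yp_c) / (xp_c @ xp_c)

print(f"within: {beta_within:.3f}  pooled: {beta_pooled:.3f}  true: {beta}")
```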

Details

Missing Data Methods: Cross-sectional Methods and Applications
Type: Book
ISBN: 978-1-78052-525-9

Book part
Publication date: 12 December 2003

Tiemen Woutersen

Abstract

One way to control for the heterogeneity in panel data is to allow for time-invariant, individual specific parameters. This fixed effect approach introduces many parameters into the model, which causes the “incidental parameter problem”: the maximum likelihood estimator is in general inconsistent. Woutersen (2001) shows how to approximately separate the parameters of interest from the fixed effects using a reparametrization. He then shows how a Bayesian method gives a general solution to the incidental parameter problem for correctly specified models. This paper extends Woutersen (2001) to misspecified models. Following White (1982), we assume that the expectation of the score of the integrated likelihood is zero at the true values of the parameters. We then derive the conditions under which a Bayesian estimator converges at rate √N, where N is the number of individuals. Under these conditions, we show that the variance-covariance matrix of the Bayesian estimator has the form of White (1982). We illustrate our approach with the dynamic linear model with fixed effects and a duration model with fixed effects.
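
The incidental parameter problem the abstract refers to can be seen in the classical Neyman–Scott example: with one fixed effect per individual and T = 2 observations each, the ML estimator of the error variance converges to σ²(T−1)/T rather than σ². A minimal simulation, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, sigma2 = 5000, 2, 1.0             # many individuals, short panel

mu = rng.normal(size=(N, 1))            # one incidental parameter per individual
y = mu + rng.normal(scale=np.sqrt(sigma2), size=(N, T))

# ML estimate of sigma^2 with the mu_i profiled out: average squared
# deviation from each individual's own mean, dividing by T (not T-1).
sigma2_ml = ((y - y.mean(axis=1, keepdims=True)) ** 2).mean()

print(f"ML estimate: {sigma2_ml:.3f}  "
      f"(converges to sigma^2*(T-1)/T = {sigma2 * (T - 1) / T})")
```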

Details

Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later
Type: Book
ISBN: 978-1-84950-253-5

Article
Publication date: 1 December 1995

E. Douglas Beach, Jorge Fernandez‐Cornejo and Noel D. Uri

Abstract

Survey data on expected and actual prices received by individual vegetable growers in Florida, Michigan and Texas in 1990 are used to test the rational expectations hypothesis. The use of individual grower data overcomes many of the issues that have limited previous tests of this hypothesis in agriculture. Overall, the study finds that the price expectations of vegetable growers are inconsistent with the rational expectations hypothesis for the majority of vegetable/state combinations studied.
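
A standard version of such a test regresses actual on expected prices and tests jointly that the intercept is zero and the slope is one. The sketch below uses synthetic data in place of the grower survey; all numbers are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
p_expected = rng.uniform(5, 15, size=200)                  # hypothetical grower forecasts
p_actual = 1.5 + 0.8 * p_expected + rng.normal(size=200)   # biased expectations by construction

X = sm.add_constant(p_expected)
fit = sm.OLS(p_actual, X).fit()

# Under rational expectations: intercept = 0 and slope = 1 jointly.
print(fit.f_test("const = 0, x1 = 1"))
```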

Details

Journal of Economic Studies, vol. 22 no. 6
Type: Research Article
ISSN: 0144-3585

Article
Publication date: 24 January 2022

Shobha Y.K. and Rangaraju H.G.

Abstract

Purpose

To optimize the bit error rate (BER) and substantiate the performance measures, the filter bank multicarrier (FBMC) quadrature amplitude modulation (QAM) performance metrics are first evaluated against the cyclic prefix-orthogonal frequency division multiplexing (CP-OFDM) system. The efficiency of CP-OFDM, as well as of FBMC/QAM transmitting over specific fading channels, is evaluated in terms of quality trade-off metrics over BER and modulation order. Compared with traditional FBMC systems, the proposed FBMC/QAM system shows better performance. The performance metrics of FBMC/QAM with the inclusion of multiuser multiple-input-multiple-output (MU-MIMO) are validated in a worst-case channel environment. The performance penalty gap that exists in CP-OFDM is compared with the improved FBMC/QAM in terms of both BER and out-of-band (OOB) radiation measures. The BER trade-off comparison between maximum likelihood (ML) and minimum mean square error (MMSE) detection determines the more suitable signal detection model for a high-performance FBMC/QAM system.

Design/methodology/approach

The main objective of this research work is to provide insight into performance and co-channel interference avoidance, as well as into the techniques used to minimize the complexity of the FBMC/QAM structure, so as to reduce intrinsic interference while preserving high spectral efficiency and supporting maximum likelihood (ML) detector systems.

Findings

This research work also examines the efficiency of multiuser multiple-input-multiple-output (MU-MIMO) FBMC/QAM over nonlinear channels. Furthermore, compared with OFDM, the proposed scheme significantly reduces the performance penalty gap, making the proposed FBMC/QAM system attractive from both a BER and an implementation point of view. Finally, signal detection is facilitated by a sub-detector on the downlink side that uses threshold-driven statistical measures to reduce the complexity trade-off of the ML detector over modulation order. The BER performance of the proposed FBMC method was computed in MATLAB simulation environments, and the efficiency of the suggested work was demonstrated through detailed analyses.

Originality/value

This research work intends to combine an efficient MU-MIMO-based transmission scheme with optimal FBMC/QAM for improved QoS over highly nonlinear channels that include both delay spread and Doppler effects. An optimal signal detection model is facilitated at the downlink side by making use of threshold-driven statistical measures that reduce the complexity trade-off of the ML detector over modulation order.
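
The abstract reports MATLAB simulations; as a rough, self-contained Python analogue of the kind of BER-versus-Eb/N0 experiment described (plain 16-QAM over AWGN, not the full FBMC/QAM chain), one might write:

```python
import numpy as np

rng = np.random.default_rng(3)

def qam16_ber(ebn0_db: float, n_sym: int = 200_000) -> float:
    """Monte Carlo BER of Gray-coded 16-QAM over AWGN (per-axis 4-PAM)."""
    k = 4                                     # bits per 16-QAM symbol
    levels = np.array([-3, -1, 1, 3])
    i = levels[rng.integers(0, 4, n_sym)]
    q = levels[rng.integers(0, 4, n_sym)]
    es = 10.0                                 # average symbol energy of this grid
    n0 = es / (k * 10 ** (ebn0_db / 10))
    noise = np.sqrt(n0 / 2) * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
    r = i + 1j * q + noise
    # Hard decisions per axis; with Gray coding, adjacent-level errors
    # correspond to single bit errors, so approximate the BER by
    # counting level errors per axis.
    det = lambda v: levels[np.clip(np.round((v + 3) / 2), 0, 3).astype(int)]
    errs = np.count_nonzero(det(r.real) != i) + np.count_nonzero(det(r.imag) != q)
    return errs / (k * n_sym)

for snr in (4, 8, 12):
    print(snr, "dB ->", qam16_ber(snr))
```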

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 5
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 8 October 2018

Yanbiao Zou and Xiangzhi Chen

Abstract

Purpose

This paper aims to propose a hand–eye calibration method of arc welding robot and laser vision sensor by using semidefinite programming (SDP).

Design/methodology/approach

The conversion relationship between the pixel coordinate system and the laser plane coordinate system is established on the basis of the mathematical model of three-dimensional measurement of the laser vision sensor. In addition, the conversion relationship between the arc welding robot coordinate system and the laser vision sensor measurement coordinate system is established on the basis of the hand–eye calibration model. Ordinary least squares (OLS) is used to calculate the rotation matrix, and SDP is used to identify the direction vectors of the rotation matrix to ensure their orthogonality.

Findings

The feasibility identification can reduce the calibration error and ensure the orthogonality of the calibration results. More accurate calibration results can be obtained by combining OLS + SDP.

Originality/value

A set of advanced calibration methods is systematically established, including parameter calibration of the laser vision sensor and hand–eye calibration of robots and sensors. For the hand–eye calibration, the physical feasibility problem of the rotation matrix is creatively put forward and solved through the SDP algorithm. High-precision calibration results provide a good foundation for future research on seam tracking.
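
The paper enforces orthogonality through SDP; a common alternative with the same goal, sketched below on synthetic point pairs, projects the unconstrained least-squares estimate onto the nearest rotation matrix via SVD. All frames and noise levels here are hypothetical, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic ground truth: a rotation R and translation t mapping sensor
# coordinates to robot coordinates (stand-ins for the paper's frames).
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.2, 0.5])

P = rng.uniform(-1, 1, size=(50, 3))                          # points in sensor frame
Q = P @ R_true.T + t_true + 0.01 * rng.normal(size=(50, 3))   # noisy robot-frame points

# Step 1: unconstrained least squares for [R | t] -- R_ols is generally
# not orthogonal, which is the feasibility problem the paper addresses.
A = np.hstack([P, np.ones((50, 1))])
sol, *_ = np.linalg.lstsq(A, Q, rcond=None)
R_ols = sol[:3].T

# Step 2 (SVD projection in place of the paper's SDP): nearest matrix
# in SO(3) to R_ols in the Frobenius norm.
U, _, Vt = np.linalg.svd(R_ols)
D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
R_proj = U @ D @ Vt

print("orthogonality error before:", np.linalg.norm(R_ols.T @ R_ols - np.eye(3)))
print("orthogonality error after: ", np.linalg.norm(R_proj.T @ R_proj - np.eye(3)))
```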

Details

Industrial Robot: An International Journal, vol. 45 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 6 February 2017

Ghasem Sadeghi Bajestani, Mohammad Reza Hashemi Golpayegani, Ali Sheikhani and Farah Ashrafzadeh

Abstract

Purpose

This paper aims to explain, first, the steps of signal modeling using the Poincaré section; then, considering the occurring events and the concept of information, brain pattern variations in autism spectrum disorder (ASD) cases are diagnosed by applying the Poincaré section and an information approach. A representation of the electroencephalogram (EEG) signal, namely the complementary plot, is introduced; its main characteristic is special attention to the asymmetry and symmetry that coexist in natural and human processes. In this paper, a new model is provided whose pattern variations are similar to the EEG's when the transformation parameter is changed. A significant difference between ASD and healthy cases was also observed, which could be used to distinguish between various types of systems.

Design/methodology/approach

The complementary plot method is one of the most suitable representations of the Poincaré section of complex dynamics because, as noted above, it takes a qualitative approach toward the signal (Sabelli, 2000, 2001, 2003, 2005, 2008; Sabelli et al., 2011). Considering the special conditions of this representation, the intersection with a circle y² + x² = r² is used here; importantly, in contrast to previous representations in which the circular section had an energy interpretation, here the circular section considers phases. To find the trajectory intersection points, the sine and cosine of each term of the EEG are calculated and plotted in the XY plane, a chord is drawn between successive points of the presentation transitions, and its intersections with the assumed circle are determined. Given the sampling frequency, the chords and the Poincaré section, a minimum error should be assumed as the threshold in the program.
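
A minimal sketch of the construction just described, mapping each sample to its (cosine, sine) point, drawing chords between successive points and intersecting them with the circle y² + x² = r²; a random walk stands in for EEG data, and the radius is an arbitrary choice.

```python
import numpy as np

def circle_chord_intersections(signal, r=0.8):
    """Map each sample s_t to (cos s_t, sin s_t), draw chords between
    successive points, and return the chord/circle intersection points
    with the circle x^2 + y^2 = r^2 (a Poincare-style section)."""
    pts = np.column_stack([np.cos(signal), np.sin(signal)])
    hits = []
    for p, q in zip(pts[:-1], pts[1:]):
        d = q - p
        # Solve |p + u*d|^2 = r^2 for u in [0, 1].
        a, b, c = d @ d, 2 * p @ d, p @ p - r * r
        disc = b * b - 4 * a * c
        if disc < 0 or a == 0:
            continue
        for u in ((-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)):
            if 0.0 <= u <= 1.0:
                hits.append(p + u * d)
    return np.asarray(hits)

rng = np.random.default_rng(5)
fake_eeg = np.cumsum(rng.normal(size=2000))   # random-walk stand-in for EEG
print(circle_chord_intersections(fake_eeg).shape)
```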

Findings

Natural and human processes are biotic (life-like) and creative (Sabelli and Galilei), and studying coexisting opposites by calculating the sine and cosine of each term in heartbeat intervals, weather variables and integer biotic series or random walks reveals an astonishingly regular mandala pattern; these patterns are not generated by random, periodic or chaotic series (Sabelli, 2005). This paper shows that in the EEG of ASD children, mandala-like patterns of concentric rings emerge in all situations (baseline, watching animation with voice and without voice) and at both electrode sites (C3 and C4), but not in healthy individuals. The authors take the relation between sine and cosine functions as a mathematical model for complementary opposition because it involves reciprocity and orthogonality; sine and cosine are natural models for information. In fact, the trigonometric analyses of empirical data described in this paper suggest expanding the concept of co-creative opposition to include uncorrelated opposites and partial opposites, i.e. partial agonists and partial antagonists that are neither linear nor orthogonal. Using Poincaré sections, it is shown that the difference in information and creativity of the data is the distinguishing characteristic between ASD and healthy cases. Creation is the generation of novelty, diversity and complexity in complex systems.

Originality/value

This paper is an original paper based on cybernetic approaches for studying the variations of ASD children.

Details

Kybernetes, vol. 46 no. 2
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 7 June 2018

Jörg Döpke and Lars Tegtmeier

Abstract

Purpose

The purpose of this paper is to study macroeconomic risk factors driving the expected stock returns of listed private equity (LPE). The authors use LPE indices divided into different styles and regions from January 2004 to December 2016 and a set of country stock indices to estimate the macroeconomic risk profiles and corresponding risk premiums. Using a seemingly unrelated regressions (SUR) model to estimate factor sensitivities, the authors document that LPE indices exhibit stock market βs that are greater than 1. A one-factor asset pricing model using world stock market returns as the only possible risk factor is rejected on the basis of generalized method of moments (GMM) orthogonality conditions. In contrast, using the change in a currency basket, the G-7 industrial production, the G-7 term spread, the G-7 inflation rate and a recently proposed indicator of economic policy uncertainty as additional risk factors, this multifactor model is able to price a cross-section of expected LPE returns. The risk-return profile of LPE differs from country equity indices. Consequently, LPE should be treated as a separate asset class.

Design/methodology/approach

Following Ferson and Harvey (1994), the authors use an unconditional asset pricing model to capture the structure of returns across LPE. The authors use 11 LPE indices divided into different styles and regions from January 2004 to December 2016, and a set of country stock indices as spanning assets to estimate the macroeconomic risk profiles and corresponding risk premiums.

Findings

Using a seemingly unrelated regressions (SUR) model to estimate factor sensitivities, the authors document that LPE indices exhibit stock market βs that are greater than 1. The authors estimate a one-factor asset pricing model using world stock market returns as the only possible risk factor by GMM. This model is rejected on the basis of the GMM orthogonality conditions. By contrast, a multifactor model built on the change in a currency basket, the G-7 industrial production, the G-7 term spread, the G-7 inflation rate and a recently proposed indicator of global economic policy uncertainty as additional risk factors is able to price a cross-section of expected LPE returns.
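
A hedged sketch of the simplest version of such a test (a time-series regression of an LPE index's excess returns on the world market, checking the pricing restriction that the intercept is zero), with synthetic returns standing in for the indices:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T = 156                                   # monthly observations, 2004-2016

mkt = rng.normal(0.005, 0.04, T)          # hypothetical world market excess returns
lpe = 0.004 + 1.4 * mkt + rng.normal(0, 0.03, T)   # beta > 1, nonzero alpha by construction

fit = sm.OLS(lpe, sm.add_constant(mkt)).fit(cov_type="HAC", cov_kwds={"maxlags": 3})
print(fit.params)                 # [alpha, beta]
print(fit.t_test("const = 0"))    # one-factor pricing requires alpha = 0
```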

Research limitations/implications

Given data availability, the authors’ sample is strongly influenced by the financial crisis and its aftermath.

Practical implications

Information about the risk profile of LPE is important for asset allocation decisions. In particular, it may help to optimally react to contemporaneous changes in economy-wide risk factors.

Originality/value

To the best of the authors’ knowledge, this is the first LPE study to investigate whether a set of macroeconomic factors is actually priced and, therefore, associated with a non-zero risk premium in the cross-section of returns.

Details

Studies in Economics and Finance, vol. 35 no. 2
Type: Research Article
ISSN: 1086-7376

Article
Publication date: 22 November 2023

Hamid Baghestani and Bassam M. AbuAl-Foul

Abstract

Purpose

This study evaluates the Federal Reserve (Fed) initial and final forecasts of the unemployment rate for 1983Q1-2018Q4. The Fed initial forecasts in a typical quarter are made in the first month (or immediately after), and the final forecasts are made in the third month of the quarter. The analysis also includes the private forecasts, which are made close to the end of the second month of the quarter.

Design/methodology/approach

In evaluating the multi-period forecasts, the study tests for systematic bias, directional accuracy, symmetric loss, equal forecast accuracy, encompassing and orthogonality. For every test equation, it employs the Newey–West procedure in order to obtain the standard errors corrected for both heteroscedasticity and inherent serial correlation.
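
The Newey–West correction described here is available in standard software. Below is a minimal sketch of one such test equation (systematic bias: mean forecast error equals zero, with HAC standard errors up to h − 1 lags for h-step-ahead forecasts); the errors and the horizon h are hypothetical, not the Fed series.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
h = 4                                     # hypothetical forecast horizon in quarters

e = rng.normal(0.1, 0.5, 144)             # hypothetical forecast errors, 1983-2018
X = np.ones((e.size, 1))

# Systematic-bias test: mean error = 0, with Newey-West (HAC) standard
# errors; multi-step forecast errors are serially correlated up to h-1 lags.
fit = sm.OLS(e, X).fit(cov_type="HAC", cov_kwds={"maxlags": h - 1})
print(fit.t_test([1.0]))                  # H0: mean forecast error = 0
```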

Findings

Both Fed and private forecasts beat the naïve benchmark and predict directional change under symmetric loss. Fed final forecasts are more accurate than initial forecasts, meaning that predictive accuracy improves as more information becomes available. The private and Fed final forecasts contain distinct predictive information, but the latter produces significantly lower mean squared errors. The results are mixed when the study compares the private with the Fed initial forecasts. Additional results indicate that Fed (private) forecast errors are (are not) orthogonal to changes in consumer expectations about future unemployment. As such, consumer expectations can potentially help improve the accuracy of private forecasts.

Originality/value

Unlike many other studies, this study focuses on the unemployment rate, since it is an important indicator of the social cost of business cycles, and thus its forecasts are of special interest to policymakers, politicians and social scientists. Accurate unemployment rate forecasts, in particular, are essential for policymakers to design an optimal macroeconomic policy.

Details

Journal of Economic Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0144-3585

Book part
Publication date: 6 January 2016

Alessandro Giovannelli and Tommaso Proietti

Abstract

We address the problem of selecting the common factors that are relevant for forecasting macroeconomic variables. In economic forecasting using diffusion indexes, the factors are ordered, according to their importance, in terms of relative variability, and are the same for each variable to predict, that is, the process of selecting the factors is not supervised by the predictand. We propose a simple and operational supervised method, based on selecting the factors on the basis of their significance in the regression of the predictand on the predictors. Given a potentially large number of predictors, we consider linear transformations obtained by principal components analysis. The orthogonality of the components implies that the standard t-statistics for the inclusion of a particular component are independent, and thus applying a selection procedure that takes into account the multiplicity of the hypotheses tests is both correct and computationally feasible. We focus on three main multiple testing procedures: Holm's sequential method, controlling the familywise error rate, the Benjamini–Hochberg method, controlling the false discovery rate, and a procedure for incorporating prior information on the ordering of the components, based on weighting the p-values according to the eigenvalues associated with the components. We compare the empirical performances of these methods with the classical diffusion index (DI) approach proposed by Stock and Watson, conducting a pseudo-real-time forecasting exercise, assessing the predictions of eight macroeconomic variables using factors extracted from a U.S. dataset consisting of 121 quarterly time series. The overall conclusion is that nature is tricky, but essentially benign: the information that is relevant for prediction is effectively condensed by the first few factors. However, variable selection, leading to exclude some of the low-order principal components, can lead to a sizable improvement in forecasting in specific cases. Only in one instance, real personal income, were we able to detect a significant contribution from high-order components.
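
A minimal sketch of the supervised selection step (regress the predictand on principal components and apply Holm and Benjamini–Hochberg corrections to the component t-tests); the panel below is synthetic, standing in for the 121-series dataset.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(8)
T, N = 200, 50                            # synthetic stand-in for the predictor panel

X = rng.normal(size=(T, N))
X -= X.mean(axis=0)
# Principal components of the predictor panel (ordered by variance).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
F = U * s                                 # component scores, orthogonal columns

y = 0.5 * F[:, 0] - 0.3 * F[:, 3] + rng.normal(size=T)  # only components 1 and 4 matter

fit = sm.OLS(y, sm.add_constant(F[:, :10])).fit()   # regress on the first 10 components
pvals = fit.pvalues[1:]                   # skip the intercept

# Orthogonal regressors -> the t-tests are independent, so standard
# multiple testing procedures apply directly, as the chapter argues.
keep_holm, *_ = multipletests(pvals, alpha=0.05, method="holm")
keep_bh, *_ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("Holm keeps components:", np.flatnonzero(keep_holm) + 1)
print("BH keeps components:  ", np.flatnonzero(keep_bh) + 1)
```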

Details

Dynamic Factor Models
Type: Book
ISBN: 978-1-78560-353-2

Article
Publication date: 17 April 2020

Annalisa Ferrando, Ioannis Ganoulis and Carsten Preuss

Abstract

Purpose

This paper explores how firms formed their expectations about the availability of bank finance since the financial crisis. Various expectations hypotheses that incorporate backward and/or forward-looking elements and inattention are tested. From a policy perspective, the most important hypothesis is whether policy announcements have a direct impact on the expectations of companies.

Design/methodology/approach

The analysis is based on a large sample of euro area companies from the ECB “Survey on the Access to Finance of Enterprises” between 2009 and 2018. Ordered logit models are used to relate individual replies on expectations to firms' information available at the time of the forecasts. The model controls for the business cycle and firms' structural characteristics. Using a difference-in-differences approach, we test how policy announcements may affect expectations.
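
The ordered logit setup can be sketched with statsmodels' OrderedModel. The firm-level variables below are hypothetical stand-ins for the SAFE survey replies (the microdata are not public), including a post × treated interaction of the kind used in a difference-in-differences test.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(9)
n = 3000

# Hypothetical firm-level data: past availability, a post-announcement
# dummy and a treated-country dummy for the difference-in-differences term.
df = pd.DataFrame({
    "past_availability": rng.normal(size=n),
    "post": rng.integers(0, 2, n),
    "treated": rng.integers(0, 2, n),
})
df["post_x_treated"] = df["post"] * df["treated"]

# Latent expectation index, discretized into ordered survey categories.
latent = 0.8 * df["past_availability"] + 0.4 * df["post_x_treated"] + rng.logistic(size=n)
df["expectation"] = pd.cut(latent, [-np.inf, -1, 1, np.inf],
                           labels=["worsen", "unchanged", "improve"])

mod = OrderedModel(df["expectation"],
                   df[["past_availability", "post", "treated", "post_x_treated"]],
                   distr="logit")
res = mod.fit(method="bfgs", disp=False)
print(res.summary())
```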

Findings

Firms update what otherwise look like adaptive expectations on the basis of new information. The hypothesis of rational expectations is rejected. Moreover, we do not find evidence of inattention or of a wave of pessimism/optimism. The analysis of expectations around the time of the ECB Outright Monetary Transactions program provides some evidence of forward-looking expectations.

Originality/value

The paper contributes to the literature on expectations by using a novel survey in eleven countries. In the multi-country setting, country-specific business cycle effects and waves of pessimism or optimism are better controlled for. The policy announcements of summer 2012 provide a natural experiment to test the direct impact of such announcements on expectations, an issue of relevance for the monetary policy transmission to economic activity.

Details

Review of Behavioral Finance, vol. 13 no. 4
Type: Research Article
ISSN: 1940-5979
