Search results
1–10 of over 13,000
Abstract
Purpose
The purpose of this paper is to present an algorithm for real estate mass appraisal in which the impact of attributes (real estate features) is estimated by an inequality restricted least squares (IRLS) model.
Design/methodology/approach
This paper presents the real estate mass appraisal algorithm, formulated as an econometric model. A vital problem with econometric models of mass appraisal is multicollinearity. In this paper, a priori knowledge about the parameters is used by imposing restrictions in the form of inequalities; the IRLS model therefore limits the negative consequences of multicollinearity. In ordinary least squares (OLS) models, estimator variances can be inflated by multicollinearity, which can lead to wrong signs of estimates. In IRLS models, estimator efficiency is higher (estimator variances are lower), which can result in better appraisals.
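The abstract does not give the estimation details, but the idea of inequality-restricted least squares can be sketched in a few lines. Below is a minimal stand-in, assuming only non-negativity restrictions on attribute impacts and solving by projected gradient descent; the two-attribute example (floor area recorded twice, in m² and noisily in ft², giving nearly collinear columns) and all numbers are hypothetical, not taken from the paper:

```python
import numpy as np

def restricted_ls(A, b, iters=20000):
    """Least squares with non-negativity restrictions on the coefficients,
    solved by projected gradient descent (a minimal stand-in for IRLS)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - b)     # gradient step on 0.5*||Ax - b||^2
        x = np.maximum(x, 0.0)               # project onto the constraint x >= 0
    return x

# hypothetical example: floor area measured twice (m^2 and, noisily, ft^2),
# giving two nearly collinear attribute columns
rng = np.random.default_rng(0)
area = rng.uniform(40.0, 120.0, 200)
A = np.column_stack([area, area * 10.76 + rng.normal(0.0, 1.0, 200)])
b = A @ np.array([50.0, 2.0]) + rng.normal(0.0, 30.0, 200)
beta = restricted_ls(A, b)   # both attribute impacts constrained to be >= 0
```

With collinear columns an unconstrained OLS fit can flip a coefficient's sign; the projection step guarantees both impacts stay non-negative while the fit remains close to the data.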
Findings
The final effect of the analysis is a vector of the impact of real estate attributes on their value in the mass appraisal algorithm. After making expert corrections, the algorithm was used to evaluate 318 properties from the test set. Valuation errors were also discussed.
Originality/value
Restrictions in the form of inequalities were imposed on the parameters of the econometric model, ensuring the non-negativity and monotonicity of real estate attribute impacts. In the case of real estate, explanatory variables are usually correlated; OLS estimator variances are then inflated and the estimators are inefficient. Imposing restrictions in the form of inequalities can improve results because IRLS estimators are more efficient. When results are inconsistent with theoretical assumptions, the real estate mass appraisal algorithm allows an expert to adjust them. This can be important for low-quality databases, which are common in underdeveloped real estate markets. Another reason for expert correction may be the low efficiency of a given real estate market.
Cheng‐Hsien Chen, Te‐Hui Tsai, Ding‐Wen Yang, Yuan Kang and Yeon‐Pun Chang
Abstract
Purpose
The purpose of this paper is to study the influences of both the number and locations of entry holes on the static and dynamic characteristics of a rigid rotor supported by two double‐rows, inherently compensated aerostatic bearings.
Design/methodology/approach
The air is assumed to be a perfect gas undergoing an adiabatic process as it passes through the entry holes into the bearing clearance. The air film in the clearance is governed by the Reynolds equation, including the coupled effects of the wedge due to rotor rotation and the squeeze film due to rotor oscillation.
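The paper solves the compressible Reynolds equation for a rotating aerostatic bearing; as a far simpler illustration of the finite-difference approach used on such equations, the sketch below solves the steady incompressible 1D slider-bearing Reynolds equation d/dx(h³ dp/dx) = 6 μU dh/dx with ambient (zero gauge) pressure at both ends. All geometry and viscosity values are hypothetical, not taken from the paper:

```python
import numpy as np

def slider_pressure(h1=2e-5, h2=1e-5, L=0.05, U=10.0, mu=1.8e-5, n=101):
    """Finite-difference solution of d/dx(h^3 dp/dx) = 6*mu*U*dh/dx
    for a linearly converging slider film, with p = 0 at both ends."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    h = h1 + (h2 - h1) * x / L                # film thickness profile
    hf = (0.5 * (h[:-1] + h[1:])) ** 3        # h^3 at half nodes
    m = n - 2                                 # number of interior nodes
    M = np.zeros((m, m))
    rhs = np.zeros(m)
    for k in range(m):
        i = k + 1                             # global node index
        M[k, k] = -(hf[i] + hf[i - 1])
        if k > 0:
            M[k, k - 1] = hf[i - 1]
        if k < m - 1:
            M[k, k + 1] = hf[i]
        # central difference of 6*mu*U*dh/dx, times dx^2
        rhs[k] = 6.0 * mu * U * (h[i + 1] - h[i - 1]) * dx / 2.0
    p = np.zeros(n)                           # boundary nodes stay at p = 0
    p[1:-1] = np.linalg.solve(M, rhs)
    return x, p
```

The converging wedge generates positive film pressure, the classic load-carrying mechanism; the paper's compressible, rotating, multi-hole case adds the entry-hole mass balance and squeeze-film terms on top of this structure.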
Findings
The Reynolds equation is solved by the finite difference method and numerical integration to yield the static and dynamic characteristics of the air film. The equation of motion of the rotor-bearing system is obtained using the perturbation method, and the eigensolution method is used to determine the stability threshold and critical whirl ratio.
Originality/value
The paper considers the eccentricity, rotor speed and restriction parameter in the analysis of whirl instability of the rotor-aerostatic bearing system, comparing designs with various numbers and locations of entry holes.
Yuan Kang, Jian‐Lin Lee, Hua‐Chih Huang, Ching‐Yuan Lin, Hsing‐Han Lee, De‐Xing Peng and Ching‐Chu Huang
Abstract
Purpose
The paper aims to determine whether the type selection and parameter determination of the compensation are most important for yielding acceptable or optimized characteristics in the design of hydrostatic bearings.
Design/methodology/approach
This paper utilizes the equations of flow equilibrium to determine the film thickness, or worktable displacement, with respect to the recess pressure.
Findings
The stiffness due to constant-flow pump compensation increases monotonically as recess pressure increases, and it is larger than that due to orifice compensation or capillary compensation at the same recess pressure ratio.
Originality/value
The findings show that the usage range of recess pressure and compensation parameters can be selected to correspond to the smallest gradient in variations of worktable displacement or film thickness.
Ronald Nojosa and Pushpa Narayan Rathie
Abstract
Purpose
This paper deals with the estimation of the stress–strength reliability R = P(X < Y) when X and Y follow (1) independent generalized gamma (GG) distributions with only a common shape parameter or (2) independent Weibull random variables with arbitrary scale and shape parameters, generalizing the proposals of Kundu and Gupta (2006), Kundu and Raqab (2009) and Ali et al. (2012).
Design/methodology/approach
First, a closed-form expression for R is derived under conditions (1) and (2). Next, sufficient conditions are given for the convergence of the infinite series expansions used to calculate R in case (2). The GG and Weibull models are fitted by maximum likelihood using the Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton method. Confidence intervals and standard errors are calculated using the bootstrap. For illustration purposes, two real data sets are analyzed and the results are compared with recent results available in the literature.
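For the Weibull case with a common shape parameter, the closed form for R is a standard result that is easy to check by simulation; the sketch below shows it alongside a Monte Carlo estimator that also covers arbitrary shapes. The parameter values are hypothetical, and the Monte Carlo check is mine, not the paper's series expansions:

```python
import numpy as np

def r_weibull_common_shape(k, scale_x, scale_y):
    """P(X < Y) for X ~ Weibull(k, scale_x), Y ~ Weibull(k, scale_y)
    with the same shape k (standard closed form)."""
    return scale_y ** k / (scale_x ** k + scale_y ** k)

def r_monte_carlo(kx, sx, ky, sy, n=200_000, seed=1):
    """P(X < Y) for arbitrary shape/scale pairs, estimated by simulation."""
    rng = np.random.default_rng(seed)
    x = sx * rng.weibull(kx, n)     # NumPy's weibull has unit scale
    y = sy * rng.weibull(ky, n)
    return np.mean(x < y)
```

When the shapes differ, no such simple closed form exists, which is where the paper's series expansions (and their convergence conditions) come in; the simulation estimator applies either way.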
Findings
The proposed approaches improve the estimation of R by avoiding transformations of the data and make the modeling more flexible by allowing Weibull distributions with arbitrary scale and shape parameters.
Originality/value
The proposals of the paper eliminate the misestimation of R caused by subtracting a constant value from the data (Kundu and Raqab, 2009) and treat the estimation of R more adequately by using Weibull distributions without parameter restrictions. The two cases covered generalize a number of distributions and unify a number of stress–strength probability P(X < Y) results available in the literature.
Edgardo Sica, Hazar Altınbaş and Gaetano Gabriele Marini
Abstract
Purpose
Public debt forecasts represent a key policy issue. Many methodologies have been employed to predict debt sustainability, including dynamic stochastic general equilibrium models, the stock flow consistent method, the structural vector autoregressive model and, more recently, the neuro-fuzzy method. Despite their widespread application in the empirical literature, all of these approaches exhibit shortcomings that limit their utility. The present research adopts a different approach to public debt forecasts, namely the random forest, an ensemble machine learning method.
Design/methodology/approach
Using quarterly observations over the period 2000–2021, the present research tests the reliability of the random forest technique for forecasting the Italian public debt.
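The abstract does not say how the forest is implemented (a standard library is the likely choice). The toy NumPy sketch below only illustrates the bagging-plus-feature-subsampling idea behind random forests, using depth-one trees on synthetic data rather than Italian debt figures:

```python
import numpy as np

def fit_stump(X, y, feats, qs=(0.25, 0.5, 0.75)):
    """Best single-split regressor, searching only a random feature subset."""
    best_sse, best = np.inf, None
    for j in feats:
        for t in np.quantile(X[:, j], qs):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            lm, rm = y[left].mean(), y[~left].mean()
            sse = ((y[left] - lm) ** 2).sum() + ((y[~left] - rm) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (j, t, lm, rm)
    return best

def forest_fit(X, y, n_trees=100, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    trees = []
    for _ in range(n_trees):
        rows = rng.integers(0, n, n)                          # bootstrap resample
        feats = rng.choice(p, max(1, p // 2), replace=False)  # feature subsample
        trees.append(fit_stump(X[rows], y[rows], feats))
    return trees

def forest_predict(trees, X):
    return np.mean([np.where(X[:, j] <= t, lm, rm)
                    for j, t, lm, rm in trees], axis=0)

# synthetic check: one informative feature out of four
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))
y = 1.5 * X[:, 0] + rng.normal(0.0, 0.3, 300)
trees = forest_fit(X[:200], y[:200])
mse_forest = np.mean((forest_predict(trees, X[200:]) - y[200:]) ** 2)
```

In a debt-forecasting application the feature matrix would hold lagged debt-to-GDP and macro covariates; the point of the ensemble is that averaging many weak, decorrelated learners captures the signal without an explicit structural model.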
Findings
The results show the method's strong predictive power in forecasting debt-to-GDP fluctuations, with no need to model the underlying structure of the economy.
Originality/value
Compared to other methodologies, the random forest method has a predictive capacity granted by the algorithm itself. The use of repeated learning, training and validation stages provides well-defined parameters that are not conditional on strong theoretical restrictions. This makes it possible to overcome the shortcomings of the traditional techniques generally adopted in the empirical literature to forecast public debt.
Abstract
The pattern of fuel demand in two sectors of the UK economy is modelled by means of translog cost functions. The models specified in this study are designed to capture the effects of changes in relative fuel prices and assess the impact of technical change on the demand for fuels. Estimates of biases of technical change and price elasticities for the demand for fuels are calculated from an econometric study with the models.
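A translog specification of this kind typically takes the standard form below; the notation is mine, since the abstract does not show the paper's exact specification. The cost-share equations follow from Shephard's lemma:

```latex
\ln C = \alpha_0 + \sum_i \alpha_i \ln p_i
      + \tfrac{1}{2} \sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j
      + \beta_t t + \sum_i \beta_{it}\, t \ln p_i ,
\qquad
s_i = \frac{\partial \ln C}{\partial \ln p_i}
    = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_{it} t .
```

Here the β_it terms capture biases of technical change (a positive β_it indicates change biased toward using fuel i), and own-price elasticities follow from the fitted shares as ε_ii = (γ_ii + s_i² − s_i)/s_i.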
Leonardo Morales‐Arias and Guilherme V. Moura
Abstract
Purpose
The purpose of this paper is to propose and test empirically an inflation model containing permanent and transitory heteroskedastic components for the G7 countries. More specifically, recent evidence from the literature is gathered to construct a model with a heteroskedastic global component capturing comovements amongst G7 economies. Moreover, evidence of asymmetric generalized autoregressive conditional heteroskedasticity (GARCH) effects in both the transitory and the permanent components is taken into account, and the time-varying variance of each component allows their influence over observable inflation to change over time. Out-of-sample forecasting exercises are used to test the model's validity.
Design/methodology/approach
The model is written in state-space form and estimation is carried out in one step via quasi-maximum likelihood using the augmented Kalman filter, which allows us to compute smoothed estimates of the permanent and transitory components of inflation rates. Out-of-sample forecasts are compared against a random walk (RW) and an autoregressive (AR) model of order one. The significance of the differences in forecast accuracy is tested using the Diebold–Mariano test, the forecast encompassing test and the Pesaran–Timmermann test.
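A heavily stripped-down version of such a state-space decomposition is the homoskedastic local-level model, filtered below with a scalar Kalman filter. The variance values are hypothetical, and the paper's model additionally has asymmetric GARCH components and a global factor, none of which appear in this sketch:

```python
import numpy as np

def local_level_filter(y, q, r):
    """Kalman filter for y_t = mu_t + eps_t (Var r, transitory) with
    mu_t = mu_{t-1} + eta_t (Var q, permanent random-walk component)."""
    mu, P = y[0], 1e6                # diffuse-ish initialization of the state
    trend = np.empty(len(y))
    for t, obs in enumerate(y):
        P = P + q                    # predict: state variance grows by q
        k = P / (P + r)              # Kalman gain
        mu = mu + k * (obs - mu)     # update with the prediction error
        P = (1.0 - k) * P
        trend[t] = mu
    return trend
```

The filtered `trend` is the estimate of the permanent inflation component; the residual `y - trend` plays the role of the transitory component. Letting q and r vary over time (and asymmetrically with shocks) is exactly what the paper's heteroskedastic extension adds.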
Findings
The proposed model fits the data quite well and has good forecasting capabilities when compared to RW and to AR models of order one. The volatility of the global inflation trend extracted from the model captures the international effects of the “Great Moderation” and of the “Great Recession”. An increase in correlation of inflation for certain country pairs since the start of the “Great Recession” is observed. Moreover, there is evidence of asymmetry in inflation volatility, which is consistent with the idea that higher inflation levels lead to greater uncertainty about future inflation.
Originality/value
This article introduces a new global inflation model with permanent and transitory heteroskedastic components, incorporating many recent findings from the literature, and proposes a one-step estimation procedure for it. The model fits the data very well and produces good out-of-sample forecasts.
Abstract
Purpose
In this paper, the authors aim to investigate the short‐run as well as long‐run market efficiency of Indian commodity futures markets using different asset pricing models. Four agricultural (soybean, corn, castor seed and guar seed) and seven non‐agricultural (gold, silver, aluminium, copper, zinc, crude oil and natural gas) commodities have been tested for market efficiency and unbiasedness.
Design/methodology/approach
The long-run market efficiency and unbiasedness are tested using the Johansen cointegration procedure while allowing for a constant risk premium. Short-run price dynamics are investigated with both constant and time-varying risk premia: with a constant risk premium they are modeled with an ECM, and with a time-varying risk premium using an ECM-GARCH-in-Mean framework.
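The paper uses the Johansen procedure and ECM-GARCH-in-Mean; as a simpler illustration of the error-correction idea only, the sketch below runs a two-step Engle–Granger ECM on a simulated cointegrated spot/futures pair. All series are synthetic, and this is not the paper's estimator:

```python
import numpy as np

def engle_granger_ecm(spot, fut):
    """Two-step ECM: (1) cointegrating regression spot = a + b*fut;
    (2) d(spot)_t = c + gamma*u_{t-1} + phi*d(fut)_t, u the step-1 residual."""
    X1 = np.column_stack([np.ones_like(fut), fut])
    a, b = np.linalg.lstsq(X1, spot, rcond=None)[0]
    u = spot - (a + b * fut)                       # deviation from equilibrium
    X2 = np.column_stack([np.ones(len(spot) - 1), u[:-1], np.diff(fut)])
    c, gamma, phi = np.linalg.lstsq(X2, np.diff(spot), rcond=None)[0]
    return b, gamma                                # long-run slope, adjustment speed

# simulated cointegrated pair: futures follow a random walk,
# spot equals futures plus a stationary AR(1) pricing error
rng = np.random.default_rng(7)
fut = np.cumsum(rng.normal(0.0, 1.0, 500))
u = np.zeros(500)
for t in range(1, 500):
    u[t] = 0.5 * u[t - 1] + rng.normal(0.0, 0.3)
spot = 2.0 + fut + u
b, gamma = engle_granger_ecm(spot, fut)
```

A long-run slope near one with a significantly negative adjustment coefficient gamma is the signature of an efficient, unbiased futures market in this framework; the thinly traded contracts in the paper fail exactly this kind of test.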
Findings
As far as long-run efficiency is concerned, the authors find that the near-month futures prices of most commodities are cointegrated with the spot prices. The cointegration relationship is not found for the next-to-near-month futures contracts, where futures trading volume is low. The authors find support for the hypothesis that thinly traded contracts fail to forecast future spot prices and are inefficient. The unbiasedness hypothesis is rejected for most commodities. It is also found that, for all commodities, some inefficiency exists in the short run. The authors do not find support for a time-varying risk premium in the Indian commodity market context.
Originality/value
In the context of Indian commodity futures markets, this is probably the first study to explore the short-run market efficiency of futures markets in a time-varying risk premium framework. The paper also links the trading activity of Indian commodity futures markets with market efficiency.
Qasim Zaheer, Mir Majaid Manzoor and Muhammad Jawad Ahamad
Abstract
Purpose
The purpose of this article is to analyze the optimization process in depth, elaborating on the components of the entire process and the techniques used. Researchers have been drawn to the expanding trend of optimization since the turn of the century, and the rate of publication can be used to measure the field's progress. This study helps readers understand the optimization process and different algorithms, along with their applications, given that current computational power has broadened their implementation across several engineering applications.
Design/methodology/approach
A two-part analysis has been carried out of the optimization process and of the approaches used to address optimization problems. The first part focuses on a thorough examination of the optimization process, its objectives and the development of processes. The second evaluates techniques of the optimization process, as well as some new ones that have emerged to overcome the aforementioned problems.
Findings
This paper provides detailed knowledge of optimization, several approaches and their applications in civil engineering, i.e. structural, geotechnical, hydraulic, transportation and many more. It also highlights emerging techniques for which exploratory studies are still lacking.
Originality/value
Optimization processes have been studied in engineering for a very long time, but current computational power has broadened their implementation across several engineering applications. Different techniques and their prediction modes often demand high computational strength; such demands can be mitigated by using techniques that reduce computational cost while increasing accuracy.
Abstract
Extends Mäler's notion of weak complementarity between a private good and a public good to non-homothetic demand functions which can be exactly aggregated. Aggregate demand functions depending on private prices, public good quantities and income distribution statistics can then be used to recover the private individual demand functions, which reveal an individual's willingness to pay for public goods.