Search results
1 – 10 of 77
Vaseem Akram and Rohan Mukherjee
Abstract
Purpose
The main purpose of this paper is to examine the convergence hypothesis of House Price Index (HPI) in the case of 18 major Indian cities for the period 2014–2019.
Design/methodology/approach
To attain the authors' main goal, this study applies the clustering algorithm advanced by Phillips and Sul. This test forms convergence clubs based on the cities' growth in terms of HPI.
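As a rough illustration of the log-t regression that underlies the Phillips and Sul clustering procedure, the sketch below runs the test on simulated HPI paths. The number of cities, sample length and all numbers are invented for illustration, not the paper's data, and the full club-formation algorithm (which repeats this test over candidate subgroups) is omitted.

```python
# Hedged sketch of the Phillips-Sul (2007) log-t convergence test on
# simulated house-price-index data; all values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, N = 24, 6  # e.g. quarterly observations for six hypothetical cities
hpi = np.cumsum(rng.normal(1.0, 0.3, size=(T, N)), axis=0) + 100.0

log_hpi = np.log(hpi)
h = log_hpi / log_hpi.mean(axis=1, keepdims=True)  # relative transition paths
H = ((h - 1.0) ** 2).mean(axis=1)                  # cross-sectional variance

# log-t regression: log(H_1/H_t) - 2*log(log t) = a + b*log t + error,
# discarding the first ~30% of the sample as Phillips and Sul recommend
r = int(0.3 * T)
t = np.arange(r, T) + 1
y = np.log(H[0] / H[r:]) - 2.0 * np.log(np.log(t))
x = np.log(t)
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (len(y) - 2)
se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se_b
print(f"log-t coefficient b = {beta[1]:.3f}, t-stat = {t_stat:.2f}")
# one-sided test: convergence is rejected when t_stat < -1.65
```

In the full procedure this regression is applied repeatedly to candidate subgroups of cities, and clubs are the largest subgroups for which convergence is not rejected.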
Findings
The study's findings show the existence of two convergence clubs and one non-convergent group. Club 1 includes the cities with high HPI growth, whereas club 2 comprises cities with the lowest HPI growth. Cities belonging to the non-convergent group are neither converging nor diverging.
Practical implications
This study's findings will benefit home buyers, sellers, investors, regulators and policymakers interested in the dynamic interlinkages of house prices (HP) among Indian cities.
Originality/value
The majority of existing studies examine China at the province or city level. Furthermore, in the case of India, no study has investigated HP club convergence across cities. The present study therefore fills this research gap by examining HP club convergence across Indian cities.
Lim Thye Goh, Irwan Trinugroho, Siong Hook Law and Dedi Rusdi
Abstract
Purpose
The objective of this paper is to investigate the impact of institutional quality, foreign direct investment (FDI) inflows and human capital development on Indonesia’s poverty rate.
Design/methodology/approach
The quantile regression on data ranging from 1984 to 2019 was used to capture the relationship between the impact of the independent variables (FDI inflows, institutional quality and human capital development) on Indonesia’s poverty rate at different quantiles of the conditional distribution.
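The quantile regression design can be pictured with the sketch below. This is not the authors' estimator: it is a numpy-only subgradient minimizer of the pinball (check) loss on simulated data, and the variable names and coefficients are illustrative assumptions.

```python
# Hedged sketch of quantile regression at several quantiles via subgradient
# descent on the pinball loss; data and coefficients are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 400
fdi, inst, human = rng.normal(size=(3, n))
poverty = 10 - 0.5 * fdi - 1.0 * inst - 0.8 * human + rng.normal(scale=1.0, size=n)
X = np.column_stack([np.ones(n), fdi, inst, human])

def quantile_regression(X, y, q, lr=0.05, iters=20000):
    """Minimize the pinball loss rho_q(r) = r*(q - 1{r<0}) by subgradient descent."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        resid = y - X @ beta
        # subgradient of the pinball loss with respect to beta
        grad = -X.T @ np.where(resid > 0, q, q - 1.0) / len(y)
        beta -= lr * grad
    return beta

betas = {q: quantile_regression(X, poverty, q) for q in (0.25, 0.50, 0.75)}
for q, b in betas.items():
    print(q, np.round(b, 2))
```

Estimating the coefficients separately at each quantile, as above, is what lets effects differ across low and high points of the conditional poverty distribution.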
Findings
The empirical results reveal that low-quantile institutional quality is detrimental to poverty eradication, whereas FDI inflows and human capital development are significant at higher quantiles of distribution. This implies that higher-value FDI and advanced human capital development are critical to lifting Indonesians out of poverty.
Practical implications
Policymakers should prioritise strategies that advance human capital development, create an enticing investment climate that attracts high-value investments and improve institutional quality levels.
Originality/value
This study contributes to the existing literature because, unlike previous studies that focused on estimating the conditional mean effect of the explanatory variables on the poverty rate, it provides a more comprehensive understanding of the effects of FDI inflows and institutional quality at the quantiles of interest of the Indonesian poverty rate, allowing for more targeted policies.
Peer review
The peer review history for this article is available at: https://publons.com/publon/10.1108/IJSE-09-2023-0733
María María Ibañez Martín, Mara Leticia Rojas and Carlos Dabús
Abstract
Purpose
Most empirical papers on threshold effects between debt and growth focus on developed countries or a mix of developing and developed economies, often using public debt. Evidence for developing economies is inconclusive, as is the analysis of other threshold effects such as those probably caused by the level of relative development or the repayment capacity. The objective of this study was to examine threshold effects for developing economies, including external and total debt, and identify them in the debt-growth relation considering three determinants: debt itself, initial real Gross Domestic Product (GDP) per capita and debt to exports ratio.
Design/methodology/approach
We used a panel threshold regression model (PTRM) and a dynamic panel threshold model (DPTM) for a sample of 47 developing countries from 1970 to 2019.
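A minimal sketch of the threshold idea behind a PTRM, assuming a single threshold located by grid search over the sum of squared residuals (in the style of Hansen's threshold regression). The data, kink location and coefficients below are simulated stand-ins, not the study's sample.

```python
# Hedged sketch: single-threshold regression of growth on debt, with the
# threshold chosen by grid search; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 500
debt = rng.uniform(0, 100, n)  # hypothetical external debt, % of GDP
kink = 42.0                    # true threshold in the simulated DGP
growth = np.where(debt <= kink,
                  2.0 + 0.03 * debt,
                  2.0 + 0.03 * kink - 0.05 * (debt - kink))
growth = growth + rng.normal(scale=0.5, size=n)

def ssr_at(gamma):
    """Total SSR from fitting separate linear regimes on each side of gamma."""
    ssr = 0.0
    for mask in (debt <= gamma, debt > gamma):
        X = np.column_stack([np.ones(mask.sum()), debt[mask]])
        beta, *_ = np.linalg.lstsq(X, growth[mask], rcond=None)
        resid = growth[mask] - X @ beta
        ssr += resid @ resid
    return ssr

# search over interior quantiles so each regime keeps enough observations
grid = np.quantile(debt, np.linspace(0.15, 0.85, 71))
gamma_hat = min(grid, key=ssr_at)
print(f"estimated threshold: {gamma_hat:.1f}% of GDP")
```

The dynamic panel variant (DPTM) adds lagged dependent variables and instruments, but the grid-search logic over candidate thresholds is the same.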
Findings
We found (1) no evidence of threshold effects when applying total debt as the threshold variable; (2) one critical value for external debt of 42.32% (using PTRM) and 67.11% (using DPTM), above which this factor is detrimental to growth; (3) two turning points for initial GDP as a threshold variable: total and external debt positively affect growth at very low initial GDP, become nonsignificant between the critical values, and negatively influence growth above the second threshold; (4) one critical value for the external debt to exports ratio using both PTRM and DPTM, below which external debt positively affects growth and above which it affects growth negatively.
Originality/value
The outcome suggests that only poorer economies can leverage credits. The level of the threshold for the debt to exports ratio is higher than that found in previous literature, implying that the external restriction could be less relevant in recent periods. However, the threshold for the external debt-to-GDP ratio is lower compared to previous evidence.
Ibrahim Karatas and Abdulkadir Budak
Abstract
Purpose
The study aims to compare the prediction success of basic machine learning and ensemble machine learning models and, accordingly, to create novel prediction models by combining machine learning models to increase the prediction success of construction labor productivity prediction models.
Design/methodology/approach
Categorical and numerical data used in prediction models in many studies in the literature on construction labor productivity were prepared for analysis by preprocessing. The Python programming language was used to develop the machine learning models. After many variation trials, the models were combined to constitute the proposed novel voting and stacking meta-ensemble machine learning models. Finally, the models were compared using Target and Taylor diagrams.
Findings
Meta-ensemble models have been developed for labor productivity prediction by combining machine learning models. A voting ensemble combining ET, GBM, XGBoost, LightGBM, CatBoost and MLP models and a stacking ensemble combining ET, GBM, XGBoost, CatBoost and MLP models were created, with the ET model finally selected as the meta-learner. Considering prediction success, the voting and stacking meta-ensemble algorithms were found to have higher prediction success than the other machine learning algorithms. The model evaluation metrics MAE, MSE, RMSE and R² were selected to measure prediction success. For the voting meta-ensemble algorithm, the values of MAE, MSE, RMSE and R² are 0.0499, 0.0045, 0.0671 and 0.7886, respectively; for the stacking meta-ensemble algorithm, they are 0.0469, 0.0043, 0.0658 and 0.7967, respectively.
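The voting and stacking setup can be sketched with scikit-learn. The base learners, synthetic data and hyperparameters below are illustrative stand-ins for the paper's tuned models (XGBoost, LightGBM and CatBoost are swapped for learners shipped with scikit-learn), but the ensemble wiring, including an extra-trees meta-learner for stacking, mirrors the described design.

```python
# Hedged sketch of voting and stacking meta-ensembles for a regression task;
# data and base learners are illustrative, not the paper's.
from sklearn.datasets import make_regression
from sklearn.ensemble import (ExtraTreesRegressor, GradientBoostingRegressor,
                              StackingRegressor, VotingRegressor)
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("et", ExtraTreesRegressor(n_estimators=100, random_state=0)),
        ("gbm", GradientBoostingRegressor(random_state=0)),
        ("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=0))]

# voting: average of the base learners' predictions
voting = VotingRegressor(base).fit(X_tr, y_tr)
# stacking: a meta-learner (extra trees, as in the paper's final choice)
# trained on cross-validated base-learner predictions
stacking = StackingRegressor(base,
                             final_estimator=ExtraTreesRegressor(random_state=0)
                             ).fit(X_tr, y_tr)

voting_r2 = r2_score(y_te, voting.predict(X_te))
stacking_r2 = r2_score(y_te, stacking.predict(X_te))
print("voting R2:", round(voting_r2, 3))
print("stacking R2:", round(stacking_r2, 3))
```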
Research limitations/implications
The study compares machine learning algorithms with the novel meta-ensemble machine learning algorithms created to predict the labor productivity of construction formwork activity. Practitioners and project planners can use this model as a reliable and accurate tool for predicting the labor productivity of construction formwork activity prior to construction planning.
Originality/value
The study provides insight into the application of ensemble machine learning algorithms in predicting construction labor productivity. Additionally, novel meta-ensemble algorithms have been used and proposed. Therefore, it is hoped that predicting the labor productivity of construction formwork activity with high accuracy will make a great contribution to construction project management.
Taining Wang and Daniel J. Henderson
Abstract
A semiparametric stochastic frontier model is proposed for panel data, incorporating several flexible features. First, a constant elasticity of substitution (CES) production frontier is considered without log-transformation to prevent induced non-negligible estimation bias. Second, model flexibility is improved via semiparameterization, where the technology is an unknown function of a set of environment variables. The technology function accounts for latent heterogeneity across individual units, which can be freely correlated with inputs, environment variables and/or inefficiency determinants. Furthermore, the technology function incorporates a single-index structure to circumvent the curse of dimensionality. Third, distributional assumptions on both the stochastic noise and the inefficiency are eschewed for model identification. Instead, only the conditional mean of the inefficiency is assumed, which depends on related determinants, with a wide range of choices, via a positive parametric function. As a result, technical efficiency is constructed without relying on an assumed distribution for the composite error. The model provides flexible structures for both the production frontier and inefficiency, thereby alleviating the risk of model misspecification in production and efficiency analysis. The estimator involves series-based nonlinear least squares estimation for the unknown parameters and kernel-based local estimation for the technology function. Promising finite-sample performance is demonstrated through simulations, and the model is applied to investigate productive efficiency among OECD countries from 1970 to 2019.
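The starting point of the abstract, a CES frontier estimated in levels rather than logs, can be sketched by nonlinear least squares. The semiparametric technology function and the inefficiency machinery are omitted here, and the data and parameter values are simulated assumptions, not the paper's.

```python
# Hedged sketch: fitting a CES production function in levels by nonlinear
# least squares; data and true parameters are illustrative.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
n = 300
K = rng.uniform(1, 10, n)  # capital input
L = rng.uniform(1, 10, n)  # labor input

def ces(inputs, A, delta, rho):
    """CES output: A * (delta*K^rho + (1-delta)*L^rho)^(1/rho)."""
    k, l = inputs
    return A * (delta * k**rho + (1 - delta) * l**rho) ** (1.0 / rho)

# simulated output with multiplicative noise (no log-transform of the model)
y = ces((K, L), 2.0, 0.4, 0.5) * np.exp(rng.normal(0, 0.05, n))

params, _ = curve_fit(ces, (K, L), y, p0=(1.0, 0.5, 0.3),
                      bounds=([0.1, 0.01, 0.01], [10.0, 0.99, 0.99]))
print("A, delta, rho =", np.round(params, 2))
```

Estimating in levels avoids the bias a log-transformed CES approximation can induce, which is the motivation the abstract cites.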
Feng Yao, Qinling Lu, Yiguo Sun and Junsen Zhang
Abstract
The authors propose to estimate a varying coefficient panel data model with different smoothing variables and fixed effects using a two-step approach. The pilot step estimates the varying coefficients by a series method. The pilot estimates are then used to perform a one-step backfitting through local linear kernel smoothing, which is shown to be oracle efficient in the sense of being asymptotically equivalent to the estimate obtained knowing the other components of the varying coefficients. In both steps, the authors remove the fixed effects through properly constructed weights. The authors obtain the asymptotic properties of both the pilot and efficient estimators. Monte Carlo simulations show that the proposed estimator performs well. The authors illustrate its applicability by estimating a varying coefficient production frontier using panel data, without assuming distributions of the efficiency and error terms.
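The local linear kernel smoothing at the heart of the backfitting step can be sketched for a single varying coefficient. The coefficient function, bandwidth and data below are illustrative assumptions; the fixed-effect weighting and series pilot step of the actual two-step procedure are omitted.

```python
# Hedged sketch of a local linear estimate of a coefficient beta(z) in the
# model y = beta(z) * x + error; all values are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n = 400
z = rng.uniform(0, 1, n)           # smoothing variable
x = rng.normal(size=n)             # regressor
beta_true = np.sin(2 * np.pi * z)  # coefficient varying with z
y = beta_true * x + rng.normal(scale=0.2, size=n)

def local_linear_beta(z0, h=0.05):
    """Weighted LS of y on (x, x*(z - z0)) with Gaussian kernel weights."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)
    X = np.column_stack([x, x * (z - z0)])
    WX = X * w[:, None]
    coef = np.linalg.solve(X.T @ WX, WX.T @ y)
    return coef[0]  # local intercept term is the estimate of beta(z0)

est = local_linear_beta(0.25)  # true value is sin(pi/2) = 1
print(round(est, 2))
```

The linear term in `z - z0` is what distinguishes local linear from local constant smoothing and removes the leading boundary bias.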
Jialiang Xie, Shanli Zhang, Honghui Wang and Mingzhi Chen
Abstract
Purpose
With the rapid development of Internet technology, cybersecurity threats such as security loopholes, data leaks, network fraud, and ransomware have become increasingly prominent, and organized and purposeful cyberattacks have increased, posing more challenges to cybersecurity protection. Therefore, reliable network risk assessment methods and effective network security protection schemes are urgently needed.
Design/methodology/approach
Based on the dynamic behavior patterns of attackers and defenders, a Bayesian network attack graph is constructed, and a multitarget risk dynamic assessment model is proposed based on network availability, network utilization impact and vulnerability attack possibility. A self-organizing multiobjective evolutionary algorithm based on grey wolf optimization is then proposed, which the authors use to solve the multiobjective risk assessment model, obtaining a variety of different attack strategies.
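One way to picture the Bayesian network attack graph is as compromise probabilities propagated through a small DAG. The topology and exploit probabilities below are invented for illustration, and the noisy-OR independence assumption is a simplification of exact Bayesian inference, not the paper's full model.

```python
# Hedged sketch: noisy-OR propagation of compromise probabilities through a
# tiny attack graph; nodes, edges and probabilities are illustrative.
# Each node maps to (parents, probability of successful exploit given access).
graph = {
    "web_server":  ([], 0.8),
    "db_server":   (["web_server"], 0.6),
    "workstation": (["web_server"], 0.5),
    "domain_ctrl": (["db_server", "workstation"], 0.4),
}

prob = {}
for node, (parents, p_exploit) in graph.items():  # insertion order is topological
    if not parents:
        prob[node] = p_exploit
        continue
    # OR-semantics: the attacker needs at least one compromised parent;
    # parents are treated as independent (a simplifying assumption)
    p_no_access = 1.0
    for parent in parents:
        p_no_access *= 1.0 - prob[parent]
    prob[node] = (1.0 - p_no_access) * p_exploit

for node, p in prob.items():
    print(f"{node}: {p:.3f}")
```

Risk assessment models of this kind then weigh such compromise probabilities against availability and utilization impacts when scoring candidate attack or defense strategies.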
Findings
The experimental results demonstrate that the method yields 29 distinct attack strategies, from which the attacker's preferences can be obtained. Furthermore, the method efficiently addresses the security assessment problem involving multiple decision variables, thereby providing constructive guidance for secure network construction, security reinforcement and active defense.
Originality/value
A method for network risk assessment is given, and a multiobjective risk dynamic assessment model is proposed based on network availability, network utilization impact and the possibility of vulnerability attacks. The example demonstrates the effectiveness of the method in addressing network security risks.
Abstract
The standard method to estimate a stochastic frontier (SF) model is the maximum likelihood (ML) approach with the distribution assumptions of a symmetric two-sided stochastic error v and a one-sided inefficiency random component u. When v or u has a nonstandard distribution, such as v follows a generalized t distribution or u has a
Chon Van Le and Uyen Hoang Pham
Abstract
Purpose
This paper aims mainly at introducing applied statisticians and econometricians to the current research methodology with non-Euclidean data sets. Specifically, it provides the basis and rationale for statistics in Wasserstein space, where the metric on probability measures is taken as a Wasserstein metric arising from optimal transport theory.
Design/methodology/approach
The authors spell out the basis and rationale for using Wasserstein metrics on the data space of (random) probability measures.
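On the real line the Wasserstein metric has a closed form in terms of quantile functions, which the sketch below checks against SciPy's implementation on simulated samples. The distributions and sample sizes are illustrative, not drawn from the paper.

```python
# Hedged sketch: 1-D Wasserstein distance between two empirical measures,
# via SciPy and via its quantile-function characterization.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, 2000)
b = rng.normal(0.5, 1.0, 2000)  # same shape, shifted by 0.5

d = wasserstein_distance(a, b)

# For p = 1 on the line: W1(mu, nu) = integral over q of |F_a^{-1}(q) - F_b^{-1}(q)|
q = np.linspace(0, 1, 2001)[1:-1]
d_quantile = np.mean(np.abs(np.quantile(a, q) - np.quantile(b, q)))

print(round(d, 3), round(d_quantile, 3))  # both near the true shift of 0.5
```

This quantile-function view is what makes Wasserstein space tractable for one-dimensional probability-measure data; in higher dimensions the metric requires solving an optimal transport problem.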
Findings
In elaborating the new statistical analysis of non-Euclidean data sets, the paper illustrates the generalization of traditional aspects of statistical inference following Fréchet's program.
Originality/value
Besides the elaboration of research methodology for a new data analysis, the paper discusses the applications of Wasserstein metrics to the robustness of financial risk measures.
Ziwen Gao, Steven F. Lehrer, Tian Xie and Xinyu Zhang
Abstract
Motivated by empirical features that characterize cryptocurrency volatility data, the authors develop a forecasting strategy that can account for both model uncertainty and heteroskedasticity of unknown form. The theoretical investigation establishes the asymptotic optimality of the proposed heteroskedastic model averaging heterogeneous autoregressive (H-MAHAR) estimator under mild conditions. The authors additionally examine the convergence rate of the estimated weights of the proposed H-MAHAR estimator. This analysis sheds new light on the asymptotic properties of the least squares model averaging estimator under alternative complicated data generating processes (DGPs). To examine the performance of the H-MAHAR estimator, the authors conduct an out-of-sample forecasting application involving 22 different cryptocurrency assets. The results emphasize the importance of accounting for both model uncertainty and heteroskedasticity in practice.
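The heterogeneous autoregressive (HAR) component that H-MAHAR averages over can be sketched as an OLS regression of a volatility series on its lagged daily, weekly and monthly averages. The series below is simulated for illustration, not cryptocurrency data, and the model-averaging and heteroskedasticity-robust weighting layers are omitted.

```python
# Hedged sketch of a HAR volatility regression on a simulated series;
# the persistence coefficients in the DGP are illustrative.
import numpy as np

rng = np.random.default_rng(4)
T = 600
rv = np.abs(rng.normal(1.0, 0.2, T))
for t in range(22, T):  # inject daily/weekly/monthly persistence
    rv[t] = (0.1 + 0.4 * rv[t - 1] + 0.3 * rv[t - 5:t].mean()
             + 0.2 * rv[t - 22:t].mean() + 0.05 * rng.normal())

# HAR regressors: lagged daily value, weekly (5-day) and monthly (22-day) means
y = rv[22:]
Xd = rv[21:-1]
Xw = np.array([rv[t - 5:t].mean() for t in range(22, T)])
Xm = np.array([rv[t - 22:t].mean() for t in range(22, T)])
X = np.column_stack([np.ones_like(y), Xd, Xw, Xm])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 3))  # intercept, daily, weekly, monthly coefficients
```

H-MAHAR then averages across candidate HAR-type specifications with weights chosen to remain optimal under heteroskedasticity of unknown form.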