Search results

1 – 10 of 165
Open Access
Article
Publication date: 27 August 2020

Dieter Koemle and Xiaohua Yu

Abstract

Purpose

This paper reviews the current literature on theoretical and methodological issues in discrete choice experiments, which have been widely used in non-market value analysis, such as elicitation of residents' attitudes toward recreation or biodiversity conservation of forests.

Design/methodology/approach

We review the literature and attribute the possible biases in choice experiments to theoretical and empirical aspects. In particular, we introduce regret minimization as an alternative to random utility theory and shed light on incentive compatibility, status quo effects, attribute non-attendance, cognitive load, experimental design, survey methods, estimation strategies and other issues.
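
For readers unfamiliar with the two decision rules, the sketch below contrasts logit choice probabilities under random utility maximization with those under a random regret minimization rule (in the style of Chorus's RRM formulation). The attribute values and taste parameters are hypothetical illustrations, not taken from any study reviewed here.

```python
# Sketch: choice probabilities under random utility maximization (RUM, logit)
# versus random regret minimization (RRM). All numbers are hypothetical.
import numpy as np

beta = np.array([0.8, -0.5])            # tastes: e.g. scenic quality, cost
X = np.array([[3.0, 2.0],               # alternative A: attribute levels
              [5.0, 4.0],               # alternative B
              [4.0, 1.0]])              # alternative C

# RUM: utility is linear in attributes; logit choice probabilities.
V = X @ beta
p_rum = np.exp(V) / np.exp(V).sum()

# RRM: regret of i sums, over rivals j and attributes m,
# ln(1 + exp(beta_m * (x_jm - x_im))); the least-regret option is chosen.
n = len(X)
R = np.zeros(n)
for i in range(n):
    for j in range(n):
        if j != i:
            R[i] += np.log1p(np.exp(beta * (X[j] - X[i]))).sum()
p_rrm = np.exp(-R) / np.exp(-R).sum()

print("RUM probabilities:", p_rum.round(3))
print("RRM probabilities:", p_rrm.round(3))
```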

Findings

Practitioners should pay attention to many issues when carrying out choice experiments in order to avoid possible biases. Many alternatives in theoretical foundations, experimental designs, estimation strategies and even explanations should be taken into account in practice in order to obtain robust results.

Originality/value

The paper summarizes the recent developments in methodological and empirical issues of choice experiments and points out the pitfalls and future directions both theoretically and empirically.

Details

Forestry Economics Review, vol. 2 no. 1
Type: Research Article
ISSN: 2631-3030

Open Access
Article
Publication date: 20 October 2023

Thembeka Sibahle Ngcobo, Lindokuhle Talent Zungu and Nomusa Yolanda Nkomo

Abstract

Purpose

This study aims to test the dynamic impact of public debt and economic growth on newly democratized African countries (South Africa and Namibia) and compare the findings with those of newly democratized European countries (Germany and Ukraine) during the period 1990–2022.

Design/methodology/approach

The methodology involves three stages: identifying the appropriate transition variable, assessing the linearity between public debt and economic growth, and selecting the order m of the transition function. The linearity test identifies the nature of the relationship between public debt and economic growth, while the wild cluster bootstrap-Lagrange Multiplier test evaluates the model's appropriateness. All of these tests are executed as Lagrange Multiplier-type tests.
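
As a rough illustration of why the order m matters, the sketch below evaluates a logistic transition function of order m = 1 (one threshold, the U-shape case) and m = 2 (two thresholds, allowing the S-shaped double transition the paper reports). The slope and location parameters are invented for illustration and are not the paper's estimates.

```python
# Sketch: logistic transition function G(q; gamma, c) used in
# smooth-transition regression models; c holds the m location parameters.
import numpy as np

def transition(q, gamma, c):
    """G in [0, 1]; order m is the number of location parameters in c."""
    prod = np.ones_like(q)
    for cj in c:
        prod *= (q - cj)
    return 1.0 / (1.0 + np.exp(-gamma * prod))

debt_ratio = np.linspace(20, 90, 8)                      # public debt, % of GDP
G1 = transition(debt_ratio, gamma=0.5, c=[50.0])         # m = 1: one threshold
G2 = transition(debt_ratio, gamma=0.05, c=[50.0, 65.0])  # m = 2: two thresholds
print("m=1:", np.round(G1, 2))
print("m=2:", np.round(G2, 2))
```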

Findings

The results signal a policy switch: the relationship between public debt and economic growth is characterized by two transitions, indicating that the relationship has moved beyond a U-shape to an S-shape. For the newly democratized African countries, the threshold during the first wave was 50% of GDP, represented by a U-shape, which then transits to an inverted U-shape with a threshold of 65% of GDP. For the European case, the threshold was 60% of GDP and is now 72% of GDP.

Originality/value

The findings suggest that an escalating level of public debt has a negative impact on economic growth; therefore, it is important to implement fiscal discipline, prioritize government spending and reduce reliance on debt financing. This can be achieved by focusing on revenue generation, implementing effective taxation policies, reducing wasteful expenditures and promoting investment and productivity-enhancing measures.

Details

International Journal of Development Issues, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1446-8956

Open Access
Article
Publication date: 21 December 2021

Vahid Badeli, Sascha Ranftl, Gian Marco Melito, Alice Reinbacher-Köstinger, Wolfgang Von Der Linden, Katrin Ellermann and Oszkar Biro

Abstract

Purpose

This paper aims to introduce a non-invasive and convenient method to detect a life-threatening disease called aortic dissection. Bayesian inference based on an enhanced multi-sensor impedance cardiography (ICG) method is applied to classify signals from healthy and sick patients.

Design/methodology/approach

A 3D numerical model consisting of simplified organ geometries is used to simulate the electrical impedance changes in the ICG-relevant domain of the human torso. Bayesian probability theory is used to detect an aortic dissection, providing the probabilities for both cases: a dissected and a healthy aorta. Thus, this method quantifies the reliability and the uncertainty of the disease identification and may indicate the need for further diagnostic clarification.
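
The sketch below illustrates the classification step in miniature: Bayes' rule applied to a single simulated impedance feature with Gaussian class-conditional likelihoods. All distributions and numbers are hypothetical stand-ins for the paper's simulated ICG signals.

```python
# Sketch: two-class Bayesian classification of a simulated ICG feature.
# Priors, means and spreads below are invented for illustration.
import numpy as np
from scipy.stats import norm

prior = {"dissected": 0.5, "healthy": 0.5}
# Class-conditional distributions of a scalar impedance-change feature:
like = {"dissected": norm(loc=1.8, scale=0.4),
        "healthy":   norm(loc=1.0, scale=0.3)}

def posterior(x):
    """P(class | x) via Bayes' rule; conveys decision uncertainty, not just a label."""
    joint = {c: prior[c] * like[c].pdf(x) for c in prior}
    z = sum(joint.values())
    return {c: joint[c] / z for c in joint}

print(posterior(1.6))   # posterior probabilities for both cases
```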

Findings

The Bayesian classification shows that the enhanced multi-sensor ICG is more reliable in detecting aortic dissection than conventional ICG. Bayesian probability theory allows a rigorous quantification of all uncertainties to draw reliable conclusions for the medical treatment of aortic dissection.

Originality/value

This paper presents a non-invasive and reliable method based on a numerical simulation that could be beneficial for the medical management of aortic dissection patients. With this method, clinicians would be able to monitor the patient’s status and make better decisions in the treatment procedure of each patient.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 41 no. 3
Type: Research Article
ISSN: 0332-1649

Open Access
Article
Publication date: 16 June 2022

Dejan Živkov and Jasmina Đurašković

Abstract

Purpose

This paper aims to investigate how oil price uncertainty affects real gross domestic product (GDP) and industrial production in eight Central and Eastern European countries (CEEC).

Design/methodology/approach

In the research process, the authors use the Bayesian method of inference for the two applied methodologies: the Markov switching generalized autoregressive conditional heteroscedasticity (GARCH) model and quantile regression.
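
The sketch below shows one common way to implement the Bayesian quantile regression step: an asymmetric Laplace (check-loss) likelihood sampled with random-walk Metropolis. The simulated data and variable names are illustrative assumptions; this is not the authors' exact specification, which also involves the Markov switching GARCH stage.

```python
# Sketch: Bayesian quantile regression of output growth on oil uncertainty
# via the asymmetric Laplace likelihood and random-walk Metropolis sampling.
import numpy as np

rng = np.random.default_rng(0)
n = 300
oil_unc = rng.normal(size=n)                       # simulated oil uncertainty
growth = 0.5 - 0.3 * oil_unc + rng.normal(size=n)  # simulated output growth
X = np.column_stack([np.ones(n), oil_unc])

def log_post(beta, tau):
    """Check-loss (asymmetric Laplace) log likelihood plus a flat prior."""
    u = growth - X @ beta
    return -np.sum(u * (tau - (u < 0)))

tau = 0.05                          # a tail quantile, as in crisis conditions
beta = np.zeros(2)
draws = []
for it in range(20000):
    prop = beta + rng.normal(scale=0.05, size=2)   # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop, tau) - log_post(beta, tau):
        beta = prop
    draws.append(beta.copy())
post = np.array(draws[5000:])                      # discard burn-in
print("posterior mean at tau=0.05:", post.mean(axis=0).round(3))
```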

Findings

The results clearly indicate that oil price uncertainty has a low effect on output in moderate market conditions in the selected countries. On the other hand, in the phases of contraction and expansion, which are portrayed by the tail quantiles, the authors find negative and positive Bayesian quantile parameters that are relatively high in magnitude. This implies that in periods of deep economic crisis, an increase in oil price uncertainty reduces output, amplifying recession pressures in the economy. Conversely, when the economy is in expansion, oil price uncertainty has no influence on output. The probable reason lies in the fact that the negative effect of oil volatility is not strong enough in the expansion phase to overpower all the other positive developments that characterize a growing economy. Also, the evidence suggests that increased oil uncertainty has a more negative effect on industrial production than on real GDP, while the industrial share in GDP plays an important role in how strongly individual CEECs are impacted by oil uncertainty.

Originality/value

This paper is the first to investigate the spillover effect from oil uncertainty to output in the CEECs.

Details

Applied Economic Analysis, vol. 31 no. 91
Type: Research Article
ISSN: 2632-7627

Open Access
Article
Publication date: 25 June 2020

Paula Cruz-García, Anabel Forte and Jesús Peiró-Palomino

Abstract

Purpose

There is abundant literature analyzing the determinants of banks' profitability through its main component: the net interest margin. Some of these determinants are suggested by seminal theoretical models and subsequent expansions; others are ad hoc selections. Up to now, there have been no studies assessing these models from a Bayesian model uncertainty perspective. This paper aims to analyze this issue for the EU-15 countries for the period 2008-2014, which mainly corresponds to the Great Recession years.

Design/methodology/approach

The paper follows a Bayesian variable selection approach to analyze, in a first step, which of the variables suggested by the literature are actually good predictors of banks' net interest margin. In a second step, using a model selection approach, the authors select the model with the best fit. Finally, the paper provides inference and quantifies the economic impact of the variables selected as good candidates.
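
As a minimal sketch of the first step, the code below enumerates candidate models under a Zellner g-prior, whose marginal likelihood is available in closed form (Liang et al., 2008), and reports posterior inclusion probabilities. The predictor names and simulated data are placeholders, not the paper's actual margin determinants.

```python
# Sketch: Bayesian variable selection by full model enumeration under a
# Zellner g-prior. Predictors and data are simulated for illustration.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, names = 200, ["risk_aversion", "credit_risk", "op_costs", "noise"]
X = rng.normal(size=(n, 4))
y = 1.0 + 0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(size=n)  # "margin"

def log_marglik(idx, g=n):
    """log m(y | model) under a g-prior, up to a model-free constant."""
    p = len(idx)
    if p == 0:
        return 0.0                      # null (intercept-only) baseline
    Xm = np.column_stack([np.ones(n), X[:, idx]])
    resid = y - Xm @ np.linalg.lstsq(Xm, y, rcond=None)[0]
    yc = y - y.mean()
    R2 = 1.0 - resid @ resid / (yc @ yc)
    return 0.5 * (n - 1 - p) * np.log(1 + g) \
         - 0.5 * (n - 1) * np.log(1 + g * (1 - R2))

models = [m for k in range(5) for m in combinations(range(4), k)]
lml = np.array([log_marglik(m) for m in models])
w = np.exp(lml - lml.max()); w /= w.sum()     # posterior model probabilities
incl = [sum(w[i] for i, m in enumerate(models) if j in m) for j in range(4)]
print(dict(zip(names, np.round(incl, 3))))    # posterior inclusion probabilities
```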

Findings

The results widely support the validity of the determinants proposed by the seminal models, with only minor discrepancies, reinforcing their capacity to explain net interest margin disparities also during the recent period of restructuring of the banking industry.

Originality/value

The paper is, to the best of the knowledge, the first one following a Bayesian variable selection approach in this field of the literature.

Details

Applied Economic Analysis, vol. 28 no. 83
Type: Research Article
ISSN: 2632-7627

Open Access
Article
Publication date: 2 September 2019

Pedro Albuquerque, Gisela Demo, Solange Alfinito and Kesia Rozzett

Abstract

Purpose

Factor analysis is the most used tool in organizational research, and its widespread use in scale validations contributes to decision-making in management. However, standard factor analysis is not always applied correctly, mainly due to the misuse of ordinal data as interval data and the inadequacy of the former for classical factor analysis. The purpose of this paper is to present and apply Bayesian factor analysis for mixed data (BFAMD) in the context of empirical research, using the Bayesian paradigm for the construction of scales.

Design/methodology/approach

Ignoring the categorical nature of some variables often used in management studies, such as the popular Likert scale, may result in a model with false accuracy and possibly biased estimates. To address this issue, Quinn (2004) proposed a Bayesian factor analysis model for mixed data, which is capable of modeling ordinal (qualitative) and continuous (quantitative) data jointly and allows the inclusion of qualitative information through prior distributions on the model's parameters. This model, adopted here, presents considerable advantages and allows the estimation of the posterior distribution of the latent variables, making the process of inference easier.
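
A minimal sketch of the data-augmentation idea behind such models follows: each ordinal response is backed by a truncated-normal latent variable, after which the factor-model updates are conjugate. This one-factor toy with fixed cutpoints is a simplification assumed for illustration, not Quinn's full mixed-data model.

```python
# Sketch: Gibbs sampling for a one-factor model with three ordinal items.
# Ordinal responses are augmented with truncated-normal latent variables.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(2)
n, J = 300, 3
lam_true = np.array([0.9, 0.7, 0.5])
f_true = rng.normal(size=n)
z_true = np.outer(f_true, lam_true) + rng.normal(size=(n, J))
cuts = np.array([-0.5, 0.5])                 # 3 ordered response categories
y = np.digitize(z_true, cuts)                # observed Likert-like data

lo_b = np.array([-np.inf, -0.5, 0.5])[y]     # per-cell truncation bounds
hi_b = np.array([-0.5, 0.5, np.inf])[y]
lam, f = np.ones(J), np.zeros(n)
keep = []
for it in range(1000):
    mu = np.outer(f, lam)                    # 1) latent responses z | rest
    z = truncnorm.rvs(lo_b - mu, hi_b - mu, loc=mu, scale=1.0)
    prec_f = 1.0 + lam @ lam                 # 2) factor scores f | rest
    f = rng.normal((z @ lam) / prec_f, 1.0 / np.sqrt(prec_f))
    prec_l = 1.0 + f @ f                     # 3) loadings lambda | rest
    lam = rng.normal((z.T @ f) / prec_l, 1.0 / np.sqrt(prec_l))
    lam[0] = abs(lam[0])                     # crude sign restriction (identification)
    if it >= 500:
        keep.append(lam.copy())
print("posterior mean loadings:", np.mean(keep, axis=0).round(2))  # vs. lam_true
```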

Findings

The results show that BFAMD is an effective approach for scale validation in management studies, making both exploratory and confirmatory analyses possible for the estimated factors and allowing analysts to insert a priori information regardless of the sample size, either by using credible intervals for the factor loadings or by conducting specific hypothesis tests. The flexibility of the Bayesian approach presented is counterbalanced by the fact that the main estimates used in factor analysis, such as uniquenesses and communalities, commonly lose their usual interpretation due to the choice of prior distributions.

Originality/value

Considering that the development of scales through factor analysis aims to contribute to appropriate decision-making in management, and given the increasing misuse of ordinal scales as interval scales in organizational studies, this proposal seems to be effective for mixed data analyses. The findings here are not intended to be conclusive or limiting but offer a useful starting point from which further theoretical and empirical research on Bayesian factor analysis can be built.

Details

RAUSP Management Journal, vol. 54 no. 4
Type: Research Article
ISSN: 2531-0488

Open Access
Article
Publication date: 10 May 2021

Chao Yu, Haiying Li, Xinyue Xu and Qi Sun

Abstract

Purpose

During rush hours, many passengers find it difficult to board the first train due to the insufficient capacity of metro vehicles, namely, the left-behind phenomenon. In this paper, a data-driven approach is presented to estimate left-behind patterns using automatic fare collection (AFC) data and train timetable data.

Design/methodology/approach

First, a data preprocessing method is introduced to obtain the waiting time of passengers at the target station. Second, a hierarchical Bayesian (HB) model is proposed to describe the left-behind phenomenon, in which the waiting time is expressed as a Gaussian mixture model. Then a sampling algorithm based on Markov chain Monte Carlo (MCMC) is developed to estimate the parameters in the model. Third, a case of the Beijing metro system is taken as an application of the proposed method.
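
The sketch below conveys the estimation idea in simplified form: waiting times follow a two-component Gaussian mixture (caught the first train vs. left behind once), and Gibbs sampling recovers the left-behind proportion. The data, headway value and flat (non-hierarchical) structure are assumptions for illustration, not the paper's full HB model.

```python
# Sketch: two-component Gaussian mixture for platform waiting times,
# fitted by Gibbs sampling; the mixture weight is the left-behind share.
import numpy as np

rng = np.random.default_rng(3)
headway = 3.0                                    # minutes between trains
wait = np.concatenate([rng.normal(1.5, 0.5, 700),             # caught 1st train
                       rng.normal(1.5 + headway, 0.5, 300)])  # left behind once
n = len(wait)

w, mu = 0.5, np.array([1.0, 4.0])                # init: weight, component means
sigma = 0.5                                      # treated as known, for brevity
keep_w = []
for it in range(3000):
    # 1) component assignments | parameters
    p1 = (1 - w) * np.exp(-0.5 * ((wait - mu[0]) / sigma) ** 2)
    p2 = w * np.exp(-0.5 * ((wait - mu[1]) / sigma) ** 2)
    z = rng.uniform(size=n) < p2 / (p1 + p2)     # True = left behind
    # 2) left-behind weight | assignments (Beta(1, 1) prior)
    w = rng.beta(1 + z.sum(), 1 + n - z.sum())
    # 3) component means | assignments (N(0, 10^2) prior, nearly flat)
    for k, mask in enumerate([~z, z]):
        prec = mask.sum() / sigma**2 + 1 / 100.0
        mu[k] = rng.normal((wait[mask].sum() / sigma**2) / prec,
                           1 / np.sqrt(prec))
    if it >= 1000:
        keep_w.append(w)
print("P(left behind):", np.round(np.mean(keep_w), 3))   # ~0.30 expected
```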

Findings

The comparison shows that the proposed method performs better in estimating left-behind patterns than existing maximum likelihood estimation. Finally, three main reasons for the left-behind phenomenon are summarized to help metro managers formulate relevant strategies.

Originality/value

First, an HB model is constructed to describe the left-behind phenomenon at a target station and in the target direction on the basis of AFC data and train timetable data. Second, an MCMC-based sampling method, the Metropolis–Hastings algorithm, is proposed to estimate the model parameters and obtain quantitative results for left-behind patterns. Third, a case of the Beijing metro is presented as an application to test the applicability and accuracy of the proposed method.

Details

Smart and Resilient Transportation, vol. 3 no. 2
Type: Research Article
ISSN: 2632-0487

Open Access
Article
Publication date: 5 October 2023

Babitha Philip and Hamad AlJassmi

Abstract

Purpose

To proactively draw efficient maintenance plans, road agencies should be able to forecast main road distress parameters, such as cracking, rutting, deflection and International Roughness Index (IRI). Nonetheless, the behavior of those parameters throughout pavement life cycles is associated with high uncertainty, resulting from various interrelated factors that fluctuate over time. This study aims to propose the use of dynamic Bayesian belief networks for the development of time-series prediction models to probabilistically forecast road distress parameters.

Design/methodology/approach

While the Bayesian belief network (BBN) has the merit of capturing uncertainty associated with variables in a domain, dynamic BBNs in particular are deemed ideal for forecasting road distress over time due to their Markovian and time-invariant transition probability properties. Four dynamic BBN models are developed to represent rutting, deflection, cracking and IRI, using pavement data collected from 32 major road sections in the United Arab Emirates between 2013 and 2019. Those models are based on several factors affecting pavement deterioration, which are classified into three categories: traffic factors, environmental factors and road-specific factors.
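
A minimal sketch of the forecasting core of a discrete dynamic BBN follows: the road-condition distribution is propagated through a transition matrix conditioned on a parent factor (here, traffic level). The states and probabilities are hypothetical, not the UAE-calibrated values from the paper.

```python
# Sketch: forward filtering in a discrete dynamic Bayesian network.
# Road condition (e.g. an IRI band) evolves under a traffic-dependent,
# time-invariant transition matrix; probabilities below are invented.
import numpy as np

states = ["good", "fair", "poor"]
# P(condition_t+1 | condition_t, traffic): one row-stochastic matrix per
# traffic level, reflecting faster deterioration under heavy traffic.
T = {"light": np.array([[0.90, 0.09, 0.01],
                        [0.00, 0.85, 0.15],
                        [0.00, 0.00, 1.00]]),
     "heavy": np.array([[0.75, 0.20, 0.05],
                        [0.00, 0.65, 0.35],
                        [0.00, 0.00, 1.00]])}

belief = np.array([1.0, 0.0, 0.0])              # new road: surely "good"
for year, traffic in enumerate(["light", "heavy", "heavy"], start=1):
    belief = belief @ T[traffic]                # Markovian transition step
    print(f"year {year} ({traffic}):", dict(zip(states, belief.round(3))))
```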

Findings

The four developed performance prediction models achieved an overall precision and reliability rate of over 80%.

Originality/value

The proposed approach provides flexibility to illustrate road conditions under various scenarios, which is beneficial for pavement maintainers in obtaining a realistic representation of expected future road conditions, where maintenance efforts could be prioritized and optimized.

Details

Construction Innovation, vol. 24 no. 1
Type: Research Article
ISSN: 1471-4175

Open Access
Article
Publication date: 21 March 2024

Warisa Thangjai and Sa-Aat Niwitpong

Abstract

Purpose

Confidence intervals play a crucial role in economics and finance, providing a credible range of values for an unknown parameter along with a corresponding level of certainty. Their applications encompass economic forecasting, market research, financial forecasting, econometric analysis, policy analysis, financial reporting, investment decision-making, credit risk assessment and consumer confidence surveys. Signal-to-noise ratio (SNR) finds applications in economics and finance across various domains such as economic forecasting, financial modeling, market analysis and risk assessment. A high SNR indicates a robust and dependable signal, simplifying the process of making well-informed decisions. On the other hand, a low SNR indicates a weak signal that could be obscured by noise, so decision-making procedures need to take this into serious consideration. This research focuses on the development of confidence intervals for functions derived from the SNR and explores their application in the fields of economics and finance.

Design/methodology/approach

The construction of the confidence intervals involved the application of various methodologies. For the SNR, confidence intervals were formed using the generalized confidence interval (GCI), large sample and Bayesian approaches. The difference between SNRs was estimated through the GCI, large sample, method of variance estimates recovery (MOVER), parametric bootstrap and Bayesian approaches. Additionally, confidence intervals for the common SNR were constructed using the GCI, adjusted MOVER, computational and Bayesian approaches. The performance of these confidence intervals was assessed using coverage probability and average length, evaluated through Monte Carlo simulation.
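
As a concrete example of the GCI approach, the sketch below builds a generalized pivotal quantity for the SNR of a normal sample and checks its coverage probability by Monte Carlo simulation. The sample size and parameter values are illustrative assumptions.

```python
# Sketch: generalized confidence interval (GCI) for the SNR (mu/sigma) of a
# normal sample, with a small Monte Carlo check of coverage probability.
import numpy as np

rng = np.random.default_rng(4)

def gci_snr(x, B=5000, alpha=0.05):
    n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)
    chi2 = rng.chisquare(n - 1, B)
    z = rng.normal(size=B)
    r_sigma = np.sqrt((n - 1) * s2 / chi2)      # pivotal quantity for sigma
    r_mu = xbar - z * r_sigma / np.sqrt(n)      # pivotal quantity for mu
    r_snr = r_mu / r_sigma                      # induced quantity for the SNR
    return np.quantile(r_snr, [alpha / 2, 1 - alpha / 2])

mu, sigma, n = 2.0, 1.0, 30                     # true SNR = 2.0
cover = 0
for _ in range(500):                            # coverage simulation
    lo, hi = gci_snr(rng.normal(mu, sigma, n))
    cover += lo <= mu / sigma <= hi
print("estimated coverage:", cover / 500)       # should be near 0.95
```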

Findings

The GCI approach demonstrated superior performance over other approaches in terms of both coverage probability and average length for the SNR and the difference between SNRs. Hence, employing the GCI approach is advised for constructing confidence intervals for these parameters. As for the common SNR, the Bayesian approach exhibited the shortest average length. Consequently, the Bayesian approach is recommended for constructing confidence intervals for the common SNR.

Originality/value

This research presents confidence intervals for functions of the SNR to assess SNR estimation in the fields of economics and finance.

Details

Asian Journal of Economics and Banking, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2615-9821

Open Access
Article
Publication date: 8 December 2022

James Christopher Westland

Abstract

Purpose

This paper tests whether Bayesian A/B testing yields better decisions than traditional Neyman-Pearson hypothesis testing. It proposes a model and tests it using a large, multiyear Google Analytics (GA) dataset.

Design/methodology/approach

This paper is an empirical study. Competing A/B testing models were used to analyze a large, multiyear GA dataset for a firm that relies entirely on its website and online transactions for customer engagement and sales.
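
A minimal sketch of the contrast follows: a Beta-Binomial Bayesian A/B test that yields P(B worse than A) and a credible interval for the lost conversion rate, next to a frequentist two-proportion z-test that yields only a p-value. The counts are invented; the paper's GA metrics (bounce rate, traffic) would be modeled analogously.

```python
# Sketch: Bayesian Beta-Binomial A/B test vs. frequentist two-proportion
# z-test on conversion counts. All counts are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
visits_a, conv_a = 10_000, 520        # variant A (e.g. pre-incident period)
visits_b, conv_b = 10_000, 460        # variant B (e.g. post-incident period)

# Bayesian: Beta(1, 1) priors; posterior draws give P(rate_B < rate_A)
# and a credible interval for the economically meaningful difference.
pa = rng.beta(1 + conv_a, 1 + visits_a - conv_a, 50_000)
pb = rng.beta(1 + conv_b, 1 + visits_b - conv_b, 50_000)
diff = pa - pb
print("P(B worse than A):", (diff > 0).mean())
print("95% credible interval for lost conversion rate:",
      np.quantile(diff, [0.025, 0.975]).round(4))

# Frequentist: the two-proportion z-test returns only a p-value.
p_pool = (conv_a + conv_b) / (visits_a + visits_b)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
z = (conv_a / visits_a - conv_b / visits_b) / se
print("z =", round(z, 2), "two-sided p =", round(2 * norm.sf(abs(z)), 4))
```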

Findings

Bayesian A/B tests of the data not only yielded a clear delineation of the timing and impact of the intellectual property fraud but also calculated the loss of sales dollars, traffic and time on the firm's website, with precise confidence limits. Frequentist A/B testing identified fraud in bounce rate at 5% significance and in bounces at 10% significance, but was unable to ascertain fraud at the standard significance cutoffs used in scientific studies.

Research limitations/implications

None within the scope of the research plan.

Practical implications

Bayesian A/B tests of the data not only yielded a clear delineation of the timing and impact of the IP fraud but also calculated the loss of sales dollars, traffic and time on the firm's website, with precise confidence limits.

Social implications

Bayesian A/B testing can derive economically meaningful statistics, whereas frequentist A/B testing only provides p-values, whose meaning may be hard to grasp and whose misuse is widespread and a major topic in metascience. While the misuse of p-values in scholarly articles may simply be grist for academic debate, the uncertainty surrounding the meaning of p-values in business analytics can actually cost firms money.

Originality/value

There is very little empirical research in e-commerce that uses Bayesian A/B testing. Almost all corporate testing is done via frequentist Neyman-Pearson methods.

Details

Journal of Electronic Business & Digital Economics, vol. 1 no. 1/2
Type: Research Article
ISSN: 2754-4214

1 – 10 of 165