Search results

1 – 10 of 202
Article
Publication date: 1 August 2016

Marija Vištica, Ani Grubišić and Branko Žitko

Abstract

Purpose

To initialize a student model in an intelligent tutoring system, some form of initial knowledge test should be given to the student. Since the authors cannot include all domain knowledge in that initial test, a subset of the domain knowledge should be selected. The paper aims to discuss this issue.

Design/methodology/approach

To generate a knowledge sample that truly represents a given domain knowledge, the authors can use sampling algorithms. In this paper, the authors present five sampling algorithms (Random Walk, Metropolis–Hastings Random Walk, Forest Fire, Snowball and Represent algorithm) and investigate which structural properties of the domain knowledge sample are preserved after the sampling process is conducted.
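As a concrete illustration of one of these samplers, here is a minimal sketch (not the authors' code, and the graph representation is an assumption) of a Metropolis–Hastings Random Walk over an undirected graph stored as a plain adjacency dictionary; the degree-ratio acceptance step corrects the plain random walk's bias toward high-degree nodes, making the stationary distribution uniform over nodes:

```python
import random

def mh_random_walk_sample(adj, start, n_steps, seed=0):
    """Metropolis-Hastings random walk over an undirected graph.

    adj: dict mapping each node to a list of its neighbours.
    A move from u to a uniformly chosen neighbour v is accepted with
    probability min(1, deg(u)/deg(v)); otherwise the walk stays at u.
    """
    rng = random.Random(seed)
    u = start
    visited = [u]
    for _ in range(n_steps):
        v = rng.choice(adj[u])          # propose a neighbour
        if rng.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v                       # accept the move
        visited.append(u)               # repeats are part of the sample
    return visited
```

The list of visited nodes (with repetitions) is the sample; its node set and induced subgraph are what one would compare against the full graph's structural properties.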

Findings

The samples obtained using these algorithms are compared on their cumulative node degree distributions, clustering coefficients and the lengths of the shortest paths in the sampled graph, in order to find the best one.

Originality/value

This approach is original as the authors could not find any similar work that uses graph sampling methods for student modeling.

Details

The International Journal of Information and Learning Technology, vol. 33 no. 4
Type: Research Article
ISSN: 2056-4880

Article
Publication date: 1 November 2022

Hanieh Panahi

Abstract

Purpose

Estimation of the stress–strength reliability parameter plays a vital role in assessing system efficiency. In this paper, considering independent strength and stress random variables distributed according to the inverted exponentiated Rayleigh model, the author has developed estimation procedures for the stress–strength reliability parameter R = P(X > Y) under Type II hybrid censored samples.

Design/methodology/approach

The maximum likelihood and Bayesian estimates of R based on Type II hybrid censored samples are evaluated. Because there is no closed form for the Bayes estimate, the author uses the Metropolis–Hastings algorithm to obtain an approximate Bayes estimate of the reliability parameter. Furthermore, the author constructs the asymptotic confidence interval, bootstrap confidence interval and highest posterior density (HPD) credible interval for R. A Monte Carlo simulation study has been conducted to compare the performance of the various proposed point and interval estimators. Finally, the validity of the stress–strength reliability model is demonstrated via a practical case.
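The sampling machinery behind such a Bayes estimate can be sketched generically. The following is a minimal random-walk Metropolis–Hastings sampler for a one-dimensional log-posterior; the paper's inverted exponentiated Rayleigh posterior is not reproduced here, so any target passed in (and the function names) are illustrative assumptions:

```python
import math
import random

def metropolis_hastings(log_post, init, n_iter, prop_sd=1.0, seed=0):
    """Random-walk Metropolis-Hastings for a 1-D log-posterior.

    Proposes x' = x + N(0, prop_sd^2) and accepts with probability
    min(1, exp(log_post(x') - log_post(x))).
    """
    rng = random.Random(seed)
    x = init
    samples = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, prop_sd)      # symmetric proposal
        delta = log_post(y) - log_post(x)
        if delta >= 0 or rng.random() < math.exp(delta):
            x = y                            # accept
        samples.append(x)
    return samples
```

Averaging the draws (after discarding a burn-in period) gives the approximate posterior mean, i.e. the Bayes estimate under squared-error loss.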

Findings

The performance of the various point and interval estimators is compared via the simulation study. Among all proposed estimators, the Bayes estimators obtained with the Metropolis–Hastings algorithm show the minimum MSE for all considered censoring schemes. Furthermore, the real data analysis indicates that the splashing diameter decreases as the pressure (in MPa) increases under different hybrid censored samples.

Originality/value

The frequentist and Bayesian methods are developed to estimate the associated parameters of the reliability model under the hybrid censored inverted exponentiated Rayleigh distribution. The application of the proposed stress–strength reliability model will help reliability engineers and other scientists to estimate system reliability.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 6
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 19 October 2020

Sophia Ding and Peter H. Egger

Abstract

This chapter proposes an approach toward the estimation of cross-sectional sample selection models, where the shocks on the units of observation feature some interdependence through spatial or network autocorrelation. In particular, this chapter improves on prior Bayesian work on this subject by proposing a modified approach toward sampling the multivariate-truncated, cross-sectionally dependent latent variable of the selection equation. This chapter outlines the model and implementation approach and provides simulation results documenting the better performance of the proposed approach relative to existing ones.

Book part
Publication date: 1 December 2016

Raffaella Calabrese and Johan A. Elkink

Abstract

The most widely used spatial regression models for a binary dependent variable assume a symmetric link function, such as the logistic or probit models. When the dependent variable represents a rare event, a symmetric link function can underestimate the probability that the rare event occurs. Following Calabrese and Osmetti (2013), we suggest the quantile function of the generalized extreme value (GEV) distribution as the link function in a spatial generalized linear model, and we call this model the spatial GEV (SGEV) regression model. To estimate the parameters of this model, a modified version of the Gibbs sampling method of Wang and Dey (2010) is proposed. We analyze the performance of our model by Monte Carlo simulations and evaluate its prediction accuracy on empirical data on state failure.
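For concreteness, the GEV response curve and its quantile (link) function can be written down directly. This is a generic sketch of the standard GEV (location 0, scale 1) with shape parameter xi, not code from the chapter:

```python
import math

def gev_cdf(x, xi):
    """Standard GEV cdf; the asymmetric response curve for the rare event."""
    if xi == 0.0:
        return math.exp(-math.exp(-x))   # Gumbel limit as xi -> 0
    t = 1.0 + xi * x
    if t <= 0.0:                         # outside the support
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

def gev_link(p, xi):
    """GEV quantile function: maps probability p to the linear predictor."""
    if xi == 0.0:
        return -math.log(-math.log(p))
    return ((-math.log(p)) ** (-xi) - 1.0) / xi
```

Unlike the logit or probit, this link is asymmetric: its two tails approach 0 and 1 at different rates, which is what lets the model avoid underestimating rare-event probabilities.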

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
ISBN: 978-1-78560-986-2

Article
Publication date: 12 July 2019

Victor Lapshin

Abstract

Purpose

This paper aims to illustrate how a Bayesian approach to yield fitting can be implemented in a non-parametric framework with automatic smoothing inferred from the data. It also briefly illustrates the advantages of such an approach using real data.

Design/methodology/approach

The paper uses an infinite dimensional (functional space) approach to inverse problems. Numerical computations are carried out using a Markov Chain Monte-Carlo algorithm with several tweaks to ensure good performance. The model explicitly uses bid-ask spreads to allow for observation errors and provides automatic smoothing based on them.

Findings

The non-parametric framework makes it possible to capture the complex shapes of zero-coupon yield curves typical of emerging markets. The Bayesian approach allows the precision of the estimates to be assessed, which is crucial for some applications. Examples of estimation results are reported for three bond markets of differing liquidity: liquid (German), medium-liquidity (Chinese) and illiquid (Russian).

Practical implications

The results show that an infinite-dimensional Bayesian approach to term structure estimation is feasible. Market practitioners could use this approach to gain more insight into the term structure of interest rates. For example, they could complement their non-parametric term structure estimates with Bayesian confidence intervals, which would allow them to assess the statistical significance of their results.

Originality/value

The model does not require parameter tuning during estimation. It has its own parameters, but these are selected during model setup.

Details

Studies in Economics and Finance, vol. 36 no. 3
Type: Research Article
ISSN: 1086-7376

Book part
Publication date: 24 March 2006

Hedibert Freitas Lopes and Esther Salazar

Abstract

In this paper, we propose a Bayesian approach to model the level and the variance of (financial) time series by the special class of nonlinear time series models known as the logistic smooth transition autoregressive models, or simply the LSTAR models. We first propose a Markov Chain Monte Carlo (MCMC) algorithm for the levels of the time series and then adapt it to model the stochastic volatilities. The LSTAR order is selected by three information criteria: the well-known AIC and BIC, and the deviance information criterion (DIC). We apply our algorithm to synthetic data and two real time series, namely the Canadian lynx data and the S&P 500 returns.
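The generating mechanism of a first-order LSTAR process can be sketched in a few lines. This is a generic simulation under assumed parameter names (two AR(1) regimes blended by a logistic transition in a lagged level), not the authors' implementation:

```python
import math
import random

def simulate_lstar(n, phi1, phi2, gamma, c, d=1, sigma=1.0, seed=0):
    """Simulate a first-order LSTAR process.

    y_t moves smoothly between two AR(1) regimes according to the
    logistic transition G(y_{t-d}) = 1 / (1 + exp(-gamma*(y_{t-d} - c))).
    phi1, phi2: (intercept, slope) pairs for the base and extra regime.
    gamma controls how sharp the regime switch is; c is its location.
    """
    rng = random.Random(seed)
    y = [0.0] * max(d, 1)                       # pre-sample values
    for _ in range(n):
        g = 1.0 / (1.0 + math.exp(-gamma * (y[-d] - c)))
        mean = (phi1[0] + phi1[1] * y[-1]) + g * (phi2[0] + phi2[1] * y[-1])
        y.append(mean + rng.gauss(0.0, sigma))
    return y[max(d, 1):]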

Details

Econometric Analysis of Financial and Economic Time Series
Type: Book
ISBN: 978-1-84950-388-4

Book part
Publication date: 21 February 2008

Junni L. Zhang, Donald B. Rubin and Fabrizia Mealli

Abstract

In an evaluation of a job training program, the causal effects of the program on wages are often of more interest to economists than the program's effects on employment or on income. The reason is that the effects on wages reflect the increase in human capital due to the training program, whereas the effects on total earnings or income may be simply reflecting the increased likelihood of employment without any effect on wage rates. Estimating the effects of training programs on wages is complicated by the fact that, even in a randomized experiment, wages are truncated by nonemployment, i.e., are only observed and well-defined for individuals who are employed. We present a principal stratification approach applied to a randomized social experiment that classifies participants into four latent groups according to whether they would be employed or not under treatment and control, and argue that the average treatment effect on wages is only clearly defined for those who would be employed whether they were trained or not. We summarize large sample bounds for this average treatment effect, and propose and derive a Bayesian analysis and the associated Bayesian Markov Chain Monte Carlo computational algorithm. Moreover, we illustrate the application of new code checking tools to our Bayesian analysis to detect possible coding errors. Finally, we demonstrate our Bayesian analysis using simulated data.

Details

Modelling and Evaluating Treatment Effects in Econometrics
Type: Book
ISBN: 978-0-7623-1380-8

Article
Publication date: 16 April 2018

Garrison N. Stevens, Sez Atamturktur, D. Andrew Brown, Brian J. Williams and Cetin Unal

Abstract

Purpose

Partitioned analysis is an increasingly popular approach for modeling complex systems with behaviors governed by multiple, interdependent physical phenomena. Yielding accurate representations of reality from partitioned models depends on the availability of all necessary constituent models representing relevant physical phenomena. However, there are many engineering problems where one or more of the constituents may be unavailable because of lack of knowledge regarding the underlying principles governing the behavior or the inability to experimentally observe the constituent behavior in an isolated manner through separate-effect experiments. This study aims to enable partitioned analysis in such situations with an incomplete representation of the full system by inferring the behavior of the missing constituent.

Design/methodology/approach

This paper presents a statistical method for inverse analysis to infer missing constituent physics. The feasibility of the method is demonstrated using a physics-based visco-plastic self-consistent (VPSC) model that represents the mechanics of slip and twinning behavior in 5182 aluminum alloy. However, a constituent model to carry out thermal analysis, representing the dependence of hardening parameters on temperature, is unavailable. Using integral-effect experimental data, the proposed approach is used to infer an empirical constituent model, which is then coupled with VPSC to obtain an experimentally augmented partitioned model representing the thermo-mechanical properties of 5182 aluminum alloy.

Findings

Results demonstrate the capability of the method to enable model predictions dependent upon relevant operational conditions. The VPSC model is coupled with the empirical constituent, and the newly enabled thermal-dependent predictions are compared with experimental data.

Originality/value

The method developed in this paper enables the empirical inference of a functional representation of input parameter values in lieu of a missing constituent model. Through this approach, development of partitioned models in the presence of uncertainty regarding a constituent model is made possible.

Details

Engineering Computations, vol. 35 no. 2
Type: Research Article
ISSN: 0264-4401

Book part
Publication date: 1 January 2008

Cathy W.S. Chen, Richard Gerlach and Mike K.P. So

Abstract

It is well known that volatility asymmetry exists in financial markets. This paper reviews and investigates recently developed techniques for Bayesian estimation and model selection applied to a large group of modern asymmetric heteroskedastic models. These include the GJR-GARCH, threshold autoregression with GARCH errors, TGARCH, and the double threshold heteroskedastic model with auxiliary threshold variables. Further, we briefly review recent methods for Bayesian model selection, such as reversible-jump Markov chain Monte Carlo, Monte Carlo estimation via independent sampling from each model, and importance sampling methods. Seven heteroskedastic models are then compared, for three long series of daily Asian market returns, in a model selection study illustrating the preferred model selection method. Major evidence of nonlinearity in mean and volatility is found, with the preferred model having a weighted threshold variable of local and international market news.
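As one concrete example of the asymmetry these models capture, a GJR-GARCH(1,1) return series can be simulated in a few lines. This is a hedged sketch with illustrative parameter names, not code from the paper; negative shocks raise next-period variance by an extra gamma term:

```python
import math
import random

def simulate_gjr_garch(n, omega, alpha, gamma, beta, seed=0):
    """Simulate returns under GJR-GARCH(1,1):

        h_t = omega + (alpha + gamma*I(eps_{t-1} < 0)) * eps_{t-1}^2
              + beta * h_{t-1},   eps_t = sqrt(h_t) * z_t,  z_t ~ N(0,1).
    """
    rng = random.Random(seed)
    # start variance at its unconditional (stationary) level
    h = omega / (1.0 - alpha - gamma / 2.0 - beta)
    eps = 0.0
    returns = []
    for _ in range(n):
        ind = gamma if eps < 0.0 else 0.0   # leverage term fires on bad news
        h = omega + (alpha + ind) * eps ** 2 + beta * h
        eps = math.sqrt(h) * rng.gauss(0.0, 1.0)
        returns.append(eps)
    return returns
```

Plotting the conditional variance of such a series against the sign of the previous shock makes the leverage effect visible: volatility clusters more strongly after negative returns.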

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 1 January 2004

Details

New Directions in Macromodelling
Type: Book
ISBN: 978-1-84950-830-8
