Search results
1 – 10 of 453
Dirk Zumkeller, Jean-Loup Madre, Bastian Chlond and Jimmy Armoogum
Cory A. Campbell and Sridhar Ramamoorti
Abstract
We use design thinking in the context of accounting pedagogy to exploit recent advances in cybernetics in the form of generative artificial intelligence technology. Relying on the intuition that supplementing or augmenting human argumentation (natural intelligence or NI) with parallel AI output can produce better student written assignments, we posit the “augmentation premise,” that is, ((NI + AI) > AI > NI). To test the augmentation premise, we compare student written submissions in an Accounting Information Systems (AIS) course with and without the benefit of parallel generative AI output. We then evaluate how the generative AI output enhances student-crafted revisions to their initial submissions. Using a summative quality improvement index (QII) consisting of quantitative and qualitative assessments, we present preliminary evidence supporting the augmentation premise. The augmentation premise likely extends to other accounting subdisciplines and merits generalization for enriching accounting pedagogy.
Abstract
An important but often overlooked obstacle in multivariate discrete data models is the specification of endogenous covariates. Endogeneity can be modeled as latent or observed, representing competing hypotheses about the outcomes being considered. However, little attention has been applied to deciphering which specification is best supported by the data. This paper highlights the use of existing Bayesian model comparison techniques to investigate the proper specification for endogenous covariates and to understand the nature of endogeneity. Consideration of both observed and latent modeling approaches is emphasized in two empirical applications. The first application examines linkages for banking contagion and the second application evaluates the impact of education on socioeconomic outcomes.
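As an illustrative sketch only (not the paper's discrete-data models), the Bayesian model comparison the abstract describes can be demonstrated on a toy normal-mean problem where the marginal likelihood is available in closed form; the priors, data, and seed here are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)  # simulated data with true mean 1

# Marginal likelihood of y under y_i = mu + e_i, e_i ~ N(0, 1),
# prior mu ~ N(0, tau2): jointly, y ~ N(0, I + tau2 * 1 1').
def log_marginal(y, tau2):
    n = len(y)
    cov = np.eye(n) + tau2 * np.ones((n, n))
    return stats.multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)

# Compare a diffuse-prior specification against a prior tight at zero
log_bf = log_marginal(y, tau2=10.0) - log_marginal(y, tau2=0.01)
print(log_bf)  # positive values favor the diffuse-prior specification
```

In the paper's applications the marginal likelihoods would instead come from the competing latent- and observed-endogeneity specifications, but the comparison logic — difference of log marginal likelihoods, i.e. a log Bayes factor — is the same.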
Mingliang Li and Justin L. Tobias
Abstract
We describe a new Bayesian estimation algorithm for fitting a binary treatment, ordered outcome selection model in a potential outcomes framework. We show how recent advances in simulation methods, namely data augmentation, the Gibbs sampler and the Metropolis-Hastings algorithm can be used to fit this model efficiently, and also introduce a reparameterization to help accelerate the convergence of our posterior simulator. Conventional “treatment effects” such as the Average Treatment Effect (ATE), the effect of treatment on the treated (TT) and the Local Average Treatment Effect (LATE) are adapted for this specific model, and Bayesian strategies for calculating these treatment effects are introduced. Finally, we review how one can potentially learn (or at least bound) the non-identified cross-regime correlation parameter and use this learning to calculate (or bound) parameters of interest beyond mean treatment effects.
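The data augmentation plus Gibbs sampling machinery the abstract references can be sketched, under strong simplifications, on a plain binary probit rather than the paper's treatment–selection model — an Albert–Chib-style sampler with a flat prior on the coefficients; all data are simulated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated probit data: y* = X beta + eps, y = 1[y* > 0]
n, beta_true = 500, np.array([0.5, -1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

beta = np.zeros(2)
XtX_inv = np.linalg.inv(X.T @ X)  # flat prior: posterior cov of beta | y*
draws = []
for it in range(2000):
    # 1) data augmentation: latent utilities are truncated normals,
    #    with the truncation side fixed by the observed y
    mean = X @ beta
    u = rng.uniform(size=n)
    lo = np.where(y == 1, stats.norm.cdf(-mean), 0.0)
    hi = np.where(y == 1, 1.0, stats.norm.cdf(-mean))
    ystar = mean + stats.norm.ppf(lo + u * (hi - lo))
    # 2) Gibbs step: beta | y* is a normal regression update
    b_hat = XtX_inv @ X.T @ ystar
    beta = rng.multivariate_normal(b_hat, XtX_inv)
    if it >= 500:  # discard burn-in
        draws.append(beta)

post_mean = np.mean(draws, axis=0)
print(post_mean)  # should be close to beta_true
```

The paper's sampler augments this scheme with a Metropolis–Hastings step and a reparameterization for the ordered-outcome selection structure; the sketch above only shows the augmentation-and-Gibbs core.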
Edward P. Lazear, Kathryn Shaw, Grant Hayes and James Jedras
Abstract
Wages have been spreading out across workers over time – or in other words, the 90th/50th wage ratio has risen over time. A key question is, has the productivity distribution also spread out across worker skill levels over time? Using our calculations of productivity by skill level for the United States, we show that the distributions of both wages and productivity have spread out over time, as the right tail lengthens for both. We add Organization for Economic Co-Operation and Development (OECD) countries, showing that the wage–productivity correlation exists, such that gains in aggregate productivity, or GDP per person, have resulted in higher wages for workers at the top and bottom of the wage distribution. However, across countries, those workers in the upper-income ranks have seen their wages rise the most over time. The most likely international factor explaining these wage increases is the skill-biased technological change of the digital revolution. The new artificial intelligence (AI) revolution that has just begun seems to be having similar skill-biased effects on wages. But this current AI, called “supervised learning,” is relatively similar to past technological change. The AI of the distant future will be “unsupervised learning,” and it could eventually have an effect on the jobs of the most highly skilled.
Abstract
For over three decades, vector autoregressions have played a central role in empirical macroeconomics. These models are general, can capture sophisticated dynamic behavior, and can be extended to include features such as structural instability, time-varying parameters, dynamic factors, threshold-crossing behavior, and discrete outcomes. Building upon growing evidence that the assumption of linearity may be undesirable in modeling certain macroeconomic relationships, this article seeks to add to recent advances in VAR modeling by proposing a nonparametric dynamic model for multivariate time series. In this model, the problems of modeling and estimation are approached from a hierarchical Bayesian perspective. The article considers the issues of identification, estimation, and model comparison, enabling nonparametric VAR (or NPVAR) models to be fit efficiently by Markov chain Monte Carlo (MCMC) algorithms and compared to parametric and semiparametric alternatives by marginal likelihoods and Bayes factors. Among other benefits, the methodology allows for a more careful study of structural instability while guarding against the possibility of unaccounted nonlinearity in otherwise stable economic relationships. Extensions of the proposed nonparametric model to settings with heteroskedasticity and other important modeling features are also considered. The techniques are employed to study the postwar U.S. economy, confirming the presence of distinct volatility regimes and supporting the contention that certain nonlinear relationships in the data can remain undetected by standard models.
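For orientation only, the linear parametric VAR that the article generalizes can be simulated and estimated in a few lines; the coefficient matrix below is made up, and this OLS fit stands in for neither the hierarchical Bayesian machinery nor the nonparametric extension:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
T = 1000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.5, size=2)

# Equation-by-equation OLS: regress y_t on y_{t-1}
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.round(A_hat, 2))  # should be close to A
```

The NPVAR of the article replaces the fixed linear map `A` with a nonparametric conditional mean estimated by MCMC, which is what allows it to detect the nonlinearities a linear fit like this one would miss.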
Ian M. McCarthy and Rusty Tchernis
Abstract
This chapter presents a Bayesian analysis of the endogenous treatment model with misclassified treatment participation. Our estimation procedure utilizes a combination of data augmentation, Gibbs sampling, and Metropolis–Hastings to obtain estimates of the misclassification probabilities and the treatment effect. Simulations demonstrate that the proposed Bayesian estimator accurately estimates the treatment effect in light of misclassification and endogeneity.
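A hedged toy simulation (not the chapter's estimator) shows why misclassified treatment participation matters: with symmetric misclassification at rate p, a naive difference in means is attenuated toward roughly (1 − 2p) times the true effect. All numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n, ate = 100_000, 2.0

D = rng.integers(0, 2, size=n)             # true treatment status
y = ate * D + rng.normal(size=n)           # outcome with true effect 2.0
flip = rng.uniform(size=n) < 0.2           # 20% misclassification
D_obs = np.where(flip, 1 - D, D)           # reported treatment status

naive_true = y[D == 1].mean() - y[D == 0].mean()
naive_obs = y[D_obs == 1].mean() - y[D_obs == 0].mean()
print(naive_true, naive_obs)  # the misclassified contrast is attenuated
```

With p = 0.2 the observed contrast lands near 2.0 × (1 − 0.4) = 1.2; the chapter's Bayesian procedure estimates the misclassification probabilities jointly with the treatment effect rather than ignoring them as this naive contrast does.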
Abstract
The authors propose a Markov chain Monte Carlo (MCMC) method for estimating a class of linear sum assignment problems (LSAP; the discrete case of the optimal transport problem). Prominent examples include multi-item auctions and mergers in industrial organization. The contribution is to decompose the joint likelihood of the allocation and prices by exploiting the primal and dual linear programming formulations of the underlying LSAP. This decomposition, coupled with the data augmentation technique, leads to an MCMC sampler without a repeated model-solving phase.
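For context, the underlying LSAP itself can be solved directly with SciPy's `scipy.optimize.linear_sum_assignment`; the toy valuation matrix below is invented, and this Hungarian-algorithm solver is a standard baseline rather than the authors' MCMC estimation procedure:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy multi-item auction: value[i, j] = bidder i's value for item j
value = np.array([[4.0, 1.0, 3.0],
                  [2.0, 0.0, 5.0],
                  [3.0, 2.0, 2.0]])

# Solve the LSAP: maximize total value over one-to-one assignments
rows, cols = linear_sum_assignment(value, maximize=True)
total = value[rows, cols].sum()
print(list(zip(rows, cols)), total)  # optimal total value is 11.0
```

The authors' point is that estimation does not require re-solving this program inside every MCMC iteration: exploiting the primal–dual structure of the LSAP lets the sampler avoid a repeated model-solving phase.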