Kenneth Y. Chay and Dean R. Hyslop
Abstract
We examine the roles of sample initial conditions and unobserved individual effects in consistent estimation of the dynamic binary response panel data model. Different specifications of the model are estimated using female welfare and labor force participation data from the Survey of Income and Program Participation. These include alternative random effects (RE) models, in which the conditional distributions of both the unobserved heterogeneity and the initial conditions are specified, and fixed effects (FE) conditional logit models that make no assumptions on either distribution. There are several findings. First, the hypothesis that the sample initial conditions are exogenous is rejected by both samples. Misspecification of the initial conditions results in drastically overstated estimates of the state dependence and understated estimates of the short- and long-run effects of children on labor force participation. The FE conditional logit estimates are similar to the estimates from the RE model that is flexible with respect to both the initial conditions and the correlation between the unobserved heterogeneity and the covariates. For female labor force participation, there is evidence that fertility choices are correlated with both unobserved heterogeneity and pre-sample participation histories.
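To illustrate the initial-conditions problem the abstract describes, the sketch below (not the authors' code; the data-generating values and the simple pooled-logit estimators are illustrative assumptions) simulates a dynamic binary response panel in which the initial state is correlated with the unobserved individual effect, then compares a pooled logit that treats the initial condition as exogenous with one that conditions on it, in the spirit of Wooldridge-type random effects estimators.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Illustrative DGP: y_it = 1{gamma*y_{i,t-1} + beta*x_it + alpha_i + e_it > 0},
# with the initial condition y_i0 correlated with the heterogeneity alpha_i.
rng = np.random.default_rng(0)
N, T = 2000, 6
gamma_true, beta_true = 1.0, 1.0
alpha = rng.normal(0.0, 1.0, N)                           # unobserved individual effect
x = rng.normal(0.0, 1.0, (N, T))
y0 = (alpha + rng.normal(0.0, 1.0, N) > 0).astype(float)  # endogenous initial condition
y = np.empty((N, T))
prev = y0
for t in range(T):
    p_t = expit(gamma_true * prev + beta_true * x[:, t] + alpha)
    y[:, t] = (rng.uniform(size=N) < p_t).astype(float)
    prev = y[:, t]
ylag = np.column_stack([y0, y[:, :-1]])

def negll(theta, use_y0):
    # Pooled logit log-likelihood; optionally condition on y_i0 as a crude
    # control for the correlation between alpha_i and the initial state.
    gamma, beta, const, d = theta
    idx = gamma * ylag + beta * x + const
    if use_y0:
        idx = idx + d * y0[:, None]
    p = np.clip(expit(idx), 1e-10, 1 - 1e-10)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

naive = minimize(negll, np.zeros(4), args=(False,), method="BFGS").x
cond = minimize(negll, np.zeros(4), args=(True,), method="BFGS").x
print(naive[0], cond[0])  # estimated state dependence, with and without y_i0
```

Ignoring the heterogeneity and treating the initial condition as exogenous lets the persistent individual effect masquerade as state dependence, so the naive estimate of gamma is inflated; conditioning on y_i0 pulls it back toward the truth, mirroring the paper's first finding.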
Florian Englmaier, Nicolai J. Foss, Thorbjørn Knudsen and Tobias Kretschmer
Abstract
The authors argue that organization design needs to play a more active role in the explanation of differential performance and outline a set of ideas for achieving this both in theoretical and empirical research. Firms are heterogeneous in terms of (1) how well they do things, capturing persistent productivity differences, and (2) how they do things – and both reflect firms’ organization design choices. Both types of heterogeneity can be persistent, and are interdependent, although they have typically been studied separately. The authors propose a simple formal framework – the “aggregation function framework” – that aligns organization design thinking with the emphasis on performance heterogeneity among firms that is characteristic of the strategy field. This framework allows for a more precise identification of how exactly organization design may contribute to persistent performance differences, and therefore what exactly are the assumptions that strategy and organization design scholars need to be attentive to.
MengQi (Annie) Ding and Avi Goldfarb
Abstract
This article reviews the quantitative marketing literature on artificial intelligence (AI) through an economics lens. We apply the framework in Prediction Machines: The Simple Economics of Artificial Intelligence to systematically categorize 96 research papers on AI in marketing academia into five levels of impact, which are prediction, decision, tool, strategy, and society. For each paper, we further identify each individual component of a task, the research question, the AI model used, and the broad decision type. Overall, we find there are fewer marketing papers focusing on strategy and society, and accordingly, we discuss future research opportunities in those areas.
Abstract
The study aims at reviewing a synthesis of disclosure, transparency, and International Financial Reporting Standards (IFRS) implementation in an attempt to provide directions for future research. Prior research overwhelmingly supports that the adoption or effective implementation of IFRS enhances financial reporting quality and transparency, improves the country’s investment environment, and attracts foreign direct investment (FDI) (Dayanandan, Donker, Ivanof, & Karahan, 2016; Gláserová, 2013; Muniandy & Ali, 2012). However, some researchers provide conflicting evidence that developing countries implementing IFRS are unlikely to experience higher FDI inflows (Gheorghe, 2009; Lasmin, 2012). It has also been argued that IFRS adoption decreases earnings management in countries with high levels of financial disclosure. In general, the study indicates that the adoption of IFRS has improved financial reporting quality. Common law countries have strong rules to protect investors, strict legal enforcement, and high levels of transparency of financial information. From an extensive structured review of the literature using the Scopus database, the study reviewed 105 articles; the 94 topic-related articles, drawn from 59 journals, were analysed. Most of the articles (77 of 94) were published in 2010–2018. The top five journals by citations are Journal of Accounting Research (187 citations), Abacus (125), European Accounting Review (107), Journal of Accounting and Economics (78), and Accounting and Business Research (66). The most-cited authors are Daske, Hail, Leuz, and Verdi (2013); Daske and Gebhardt (2006); and Brüggemann, Hitz, and Sellhorn (2013). Surprisingly, 65 of the 94 articles did not utilise any theory.
Among those that did, four theories appear frequently: agency theory (15), economic theory (5), signalling theory (2), and accounting theory (2). The study calls for future research on theoretical implications and for policy-related research on disclosure and transparency that may inform local and international standard setters.
Abstract
Choice under risk has a large stochastic (unpredictable) component. This chapter examines five stochastic models for binary discrete choice under risk and how they combine with “structural” theories of choice under risk. Stochastic models are substantive theoretical hypotheses that are frequently testable in and of themselves; they also serve as identifying restrictions for hypothesis tests, estimation and prediction. Econometric comparisons suggest that, for the purpose of prediction (as opposed to explanation), the choice of stochastic model may be far more consequential than the choice of structure, such as expected utility or rank-dependent utility.
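As a minimal illustration of pairing a stochastic model with a structural theory (the lotteries, CRRA utility, and noise scale here are assumed for exposition, not taken from the chapter), the sketch below layers a Fechner/logit “strong utility” noise model on expected utility:

```python
import math

def crra(x, r):
    # CRRA utility, a standard "structural" ingredient (r = relative risk aversion)
    return math.log(x) if r == 1 else x ** (1 - r) / (1 - r)

def eu(lottery, r):
    # lottery: list of (probability, outcome) pairs
    return sum(p * crra(x, r) for p, x in lottery)

def prob_choose_a(a, b, r, lam):
    # Logit ("strong utility") stochastic model on top of EU:
    # P(A over B) = 1 / (1 + exp(-(EU_A - EU_B) / lam)), lam = noise scale
    return 1.0 / (1.0 + math.exp(-(eu(a, r) - eu(b, r)) / lam))

risky = [(0.5, 16.0), (0.5, 4.0)]  # illustrative binary lottery
safe = [(1.0, 9.0)]                # illustrative sure payment
print(prob_choose_a(risky, safe, r=0.5, lam=0.25))
```

Swapping `eu` for a rank-dependent functional changes the structure while leaving the stochastic layer intact; conversely, varying `lam` (or replacing the logit with a tremble or random-preference model) changes predicted choice probabilities even with the structure held fixed, which is the sense in which the stochastic model can dominate prediction.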
Abstract
“It should also be noted that the objective of convergence and equal distribution, including across under-performing areas, can hinder efforts to generate growth. Contrariwise, the objective of competitiveness can exacerbate regional and social inequalities, by targeting efforts on zones of excellence where projects achieve greater returns (dynamic major cities, higher levels of general education, the most advanced projects, infrastructures with the heaviest traffic, and so on). If cohesion policy and the Lisbon Strategy come into conflict, it must be borne in mind that the former, for the moment, is founded on a rather more solid legal foundation than the latter” (European Commission, 2005, p. 9, Adaptation of Cohesion Policy to the Enlarged Europe and the Lisbon and Gothenburg Objectives).
Andrew T. Collins and David A. Hensher
Abstract
Purpose
There is extensive evidence that decision-makers, faced with increasing information load, may simplify their choice by reducing the amount of information to process. One simplification, commonly referred to as attribute non-attendance (ANA), is a reduction of the number of attributes of the choice alternatives. Several previous studies have identified relationships between varying information load and ANA using self-reported measures of ANA. This chapter revisits this link, motivated by recognition in the literature that such self-reported measures are vulnerable to reporting error.
Methodology
This chapter employs a recently developed modelling approach that has been shown to effectively infer ANA, the random parameters attribute non-attendance (RPANA) model. The empirical setting systematically varies the information load across respondents, on a number of dimensions.
Findings
Confirming earlier findings, ANA is accentuated by an increase in the number of attribute levels and a decrease in the number of alternatives. Additionally, specific attributes are more likely to not be attended to as the total number of attributes increases. Willingness to pay (WTP) under inferred ANA differs notably from WTP when ANA is self-reported. Additionally accounting for varying information load when inferring ANA has little impact on the WTP distribution of those who do attend. However, due to varying rates of non-attendance, the overall WTP distribution differs substantially.
Originality and value
This is the first examination of the impact of varying information load on inferred ANA that is identified with the RPANA model. The value lies in the confirmation of earlier findings despite the evolution of methodologies in the interim.
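A stripped-down illustration of the idea behind inferred ANA (a two-class mixture, not the RPANA model itself; the attributes, coefficients, and class share are assumed for exposition): the choice probability mixes a class that attends to cost with a class whose cost coefficient is constrained to zero.

```python
import numpy as np

def logit_probs(v):
    # Multinomial logit choice probabilities over alternatives' utilities v
    e = np.exp(v - v.max())
    return e / e.sum()

# Two alternatives described by (time, cost); all numbers are illustrative.
X = np.array([[20.0, 5.0],   # alt 1: 20 min, $5
              [10.0, 9.0]])  # alt 2: 10 min, $9
beta_time, beta_cost = -0.10, -0.40

# Class 1 attends to both attributes; class 2 ignores cost (non-attendance).
v_full = beta_time * X[:, 0] + beta_cost * X[:, 1]
v_ana = beta_time * X[:, 0]            # cost coefficient forced to zero
pi = 0.7                               # assumed share of cost-attenders
p_mix = pi * logit_probs(v_full) + (1 - pi) * logit_probs(v_ana)
print(p_mix)
```

Estimating `pi` and the coefficients jointly from observed choices (and, in RPANA, letting the attended coefficients be random parameters) is what allows non-attendance to be inferred rather than self-reported; WTP for time savings is then `beta_time / beta_cost` within the attending class, while the zero-coefficient class contributes the mass of non-attenders that reshapes the overall WTP distribution.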
Abstract
It has long been recognised that humans draw from a large pool of processing aids to help manage the everyday challenges of life. It is not uncommon to observe individuals adopting simplifying strategies when faced with ever-increasing amounts of information to process, especially for decisions where the chosen outcome will have a very marginal impact on their well-being. The transaction costs associated with processing all new information often exceed the benefits from such a comprehensive review. The accumulating life experiences of individuals are also often brought to bear as reference points to assist in selectively evaluating information placed in front of them. These features of human processing and cognition are not new to the broad literature on judgment and decision-making, where heuristics are offered up as deliberative analytic procedures intentionally designed to simplify choice. What is surprising is the limited recognition of the heuristics that individuals use to process the attributes in stated choice experiments. In this paper we present a case for a utility-based framework within which some appealing processing strategies are embedded (without the aid of supplementary self-stated intentions), as well as models conditioned on self-stated intentions represented as single items of process advice, and illustrate the implications for willingness to pay for travel time savings of embedding each heuristic in the choice process. Given the controversy surrounding the reliability of self-stated intentions, we introduce a framework in which mixtures of process advice embedded within a belief function might be used in future empirical studies to condition choice, as a way of better judging the strength of the evidence.