It has long been recognised that humans draw from a large pool of processing aids to help manage the everyday challenges of life. It is not uncommon to observe individuals adopting simplifying strategies when faced with ever-increasing amounts of information to process, especially for decisions where the chosen outcome will have a very marginal impact on their well-being. The transaction costs associated with processing all new information often exceed the benefits of such a comprehensive review. The accumulating life experiences of individuals are also often brought to bear as reference points to assist in selectively evaluating the information placed in front of them. These features of human processing and cognition are not new to the broad literature on judgment and decision-making, where heuristics are offered up as deliberative analytic procedures intentionally designed to simplify choice. What is surprising is the limited recognition of the heuristics that individuals use to process the attributes in stated choice experiments. In this paper we present a case for a utility-based framework within which some appealing processing strategies are embedded (without the aid of supplementary self-stated intentions), as well as models conditioned on self-stated intentions represented as single items of process advice, and illustrate the implications for willingness to pay for travel time savings of embedding each heuristic in the choice process. Given the controversy surrounding the reliability of self-stated intentions, we introduce a framework in which mixtures of process advice embedded within a belief function might be used in future empirical studies to condition choice, as a way of progressively judging the strength of the evidence.
This paper reviews the current literature on theoretical and methodological issues in discrete choice experiments, which have been widely used in non-market value analysis, such as elicitation of residents' attitudes toward recreation or biodiversity conservation of forests.
We review the literature and attribute the possible biases in choice experiments to theoretical and empirical aspects. In particular, we introduce regret minimization as an alternative to random utility theory and shed light on incentive compatibility, the status quo, attribute non-attendance, cognitive load, experimental design, survey methods, estimation strategies and other issues.
Practitioners should pay attention to many issues when carrying out choice experiments in order to avoid possible biases. Alternative theoretical foundations, experimental designs, estimation strategies and even interpretations should be taken into account in practice in order to obtain robust results.
The paper summarizes the recent developments in methodological and empirical issues of choice experiments and points out the pitfalls and future directions both theoretically and empirically.
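The contrast between random utility and regret minimization mentioned above can be made concrete with a small sketch (all coefficients and attribute levels are invented for illustration). Under Chorus-style random regret minimisation, an alternative's regret sums pairwise attribute comparisons against every competitor, which produces a compromise effect that a linear-in-parameters random utility model cannot:

```python
import math

# Hypothetical coefficients and attribute levels; C is a "compromise" option.
beta = {"cost": -1.0, "quality": 1.0}
alts = {
    "A": {"cost": 1.0, "quality": 1.0},   # cheap, low quality
    "B": {"cost": 3.0, "quality": 3.0},   # expensive, high quality
    "C": {"cost": 2.0, "quality": 2.0},   # compromise
}

def rum_probs(alts, beta):
    """Random utility: P(i) proportional to exp(V_i), V_i = sum_k beta_k * x_ik."""
    v = {i: sum(beta[k] * x[k] for k in beta) for i, x in alts.items()}
    denom = sum(math.exp(vi) for vi in v.values())
    return {i: math.exp(vi) / denom for i, vi in v.items()}

def rrm_probs(alts, beta):
    """Random regret: R_i = sum over j != i and k of ln(1 + exp(beta_k*(x_jk - x_ik)));
    P(i) proportional to exp(-R_i)."""
    regret = {}
    for i, xi in alts.items():
        r = 0.0
        for j, xj in alts.items():
            if j == i:
                continue
            for k in beta:
                r += math.log(1.0 + math.exp(beta[k] * (xj[k] - xi[k])))
        regret[i] = r
    denom = sum(math.exp(-r) for r in regret.values())
    return {i: math.exp(-r) / denom for i, r in regret.items()}

p_rum = rum_probs(alts, beta)
p_rrm = rrm_probs(alts, beta)
print(p_rum)   # all utilities are equal here, so each alternative gets 1/3
print(p_rrm)   # the compromise C attracts extra share under regret minimisation
```

In the binary case the two models coincide; the divergence above only appears with three or more alternatives, which is one reason the choice between them matters empirically.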
We investigate discrepancies between willingness to pay (WTP) and willingness to accept (WTA) in the context of a stated choice experiment. Using data on customer preferences for water services where respondents were able to both ‘sell’ and ‘buy’ the choice experiment attributes, we find evidence of non-linearity in the underlying utility function even though the range of attribute levels is relatively small. Our results reveal the presence of significant loss aversion in all the attributes, including price. We find the WTP–WTA schedule to be asymmetric around the current provision level and that the WTP–WTA ratio varies according to the particular provision change under consideration. Such reference point findings are of direct importance for practitioners and decision-makers using choice experiments for economic appraisal such as cost–benefit analysis, where failure to account for non-linearity in welfare estimates may significantly over- or understate individuals' preferences for gains and for avoiding losses, respectively.
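The loss-aversion mechanism behind such WTP–WTA gaps can be illustrated with a piecewise-linear reference-dependent utility (the coefficients below are hypothetical, not the paper's estimates). Money paid is a loss, so WTP is dampened by the loss-aversion coefficient; WTA compensates an attribute loss and is inflated by it, so the WTA/WTP ratio exceeds one:

```python
# A minimal sketch of reference-dependent valuation with loss aversion.
b_attr = 0.6      # marginal utility of a one-unit attribute gain (hypothetical)
b_money = 0.2     # marginal utility of one currency unit gained (hypothetical)
lambda_ = 2.0     # loss-aversion coefficient (losses loom twice as large)

def wtp(delta):
    """Max payment for an attribute gain: b_attr*delta = lambda_*b_money*WTP."""
    return b_attr * delta / (lambda_ * b_money)

def wta(delta):
    """Min compensation for an attribute loss: b_money*WTA = lambda_*b_attr*delta."""
    return lambda_ * b_attr * delta / b_money

print(wtp(1.0))             # 1.5
print(wta(1.0))             # 6.0
print(wta(1.0) / wtp(1.0))  # lambda_**2 = 4.0: WTA exceeds WTP fourfold
```

Note that in this linear sketch the ratio is constant at lambda squared; the paper's finding that the ratio varies with the provision change is exactly what a non-linear utility adds on top of this mechanism.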
Stated choice experiments can be used to estimate the parameters in discrete choice models by showing hypothetical choice situations to respondents. The attribute levels in each choice situation are determined by an underlying experimental design. Often, an orthogonal design is used, although recent studies have shown that better experimental designs exist, such as efficient designs. These designs provide more reliable parameter estimates, but they require prior information about the parameter values, which is often not readily available. Serial efficient designs are proposed in this paper, in which the design is updated during the survey. In contrast to adaptive conjoint, serial conjoint only changes the design across respondents, not within respondents, thereby avoiding endogeneity bias as much as possible. After each respondent, new parameters are estimated and used as priors for generating a new efficient design. Results for the multinomial logit model show that such a serial design, starting from zero initial prior values, provides the same reliability of the parameter estimates as the best efficient design (based on the true parameters). Any possible bias can be avoided by using an orthogonal design for the first few respondents. Serial designs do not suffer from misspecification of the priors, as these are continuously updated. The disadvantage is the extra implementation cost of an automated parameter estimation and design generation procedure in the survey. Also, the respondents have to be surveyed in a mostly serial fashion instead of in parallel.
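The serial loop described above (pick the design minimising D-error under current priors, survey a respondent, re-estimate, repeat) can be sketched as follows. All parameters, the candidate design pool, and attribute ranges are invented for illustration; estimation uses Newton steps with a small ridge penalty to keep early estimates finite, and a real implementation would generate candidate designs far more carefully than random draws:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 2                                  # number of attributes
beta_true = np.array([-1.0, 0.8])      # "true" preferences, used only to simulate

def probs(design, beta):
    """MNL choice probabilities; design has shape (situations, alternatives, K)."""
    v = design @ beta
    e = np.exp(v - v.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def info_matrix(design, beta):
    """Fisher information of the MNL model for one design."""
    p = probs(design, beta)
    xbar = np.einsum("sj,sjk->sk", p, design)
    dx = design - xbar[:, None, :]
    return np.einsum("sj,sjk,sjl->kl", p, dx, dx)

def d_error(design, beta):
    """D-error: lower means a more efficient design given prior beta."""
    return np.linalg.det(info_matrix(design, beta)) ** (-1.0 / K)

def estimate(designs, choices, iters=25, ridge=1.0):
    """Newton estimation of the MNL; the ridge keeps early estimates finite."""
    beta = np.zeros(K)
    for _ in range(iters):
        grad = -ridge * beta
        info = ridge * np.eye(K)
        for d, c in zip(designs, choices):
            p = probs(d, beta)
            xbar = np.einsum("sj,sjk->sk", p, d)
            grad += (d[np.arange(len(c)), c] - xbar).sum(axis=0)
            info += info_matrix(d, beta)
        beta = beta + np.linalg.solve(info, grad)
    return beta

# A pool of candidate designs: 6 choice situations, 2 alternatives each.
candidates = [rng.uniform(-1.0, 1.0, size=(6, 2, K)) for _ in range(40)]

prior = np.zeros(K)                    # zero initial priors, as in the paper
designs, choices = [], []
for respondent in range(60):
    best = min(candidates, key=lambda d: d_error(d, prior))
    p = probs(best, beta_true)         # simulate this respondent's answers
    c = np.array([rng.choice(2, p=p[s]) for s in range(6)])
    designs.append(best)
    choices.append(c)
    prior = estimate(designs, choices)  # serial update: estimates become priors

print(prior)                           # moves toward beta_true as data accumulate
```

The key design choice is that the update happens between respondents, never within one respondent's task, which is how the endogeneity of adaptive within-respondent designs is avoided.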
The purpose of this research is to test how varying the numbers of attributes and alternatives affects the use of heuristics and selective information processing in discrete choice experiments (DCEs). The effects of visual attribute and alternative non-attendance (NA) on respondent choices are analyzed.
Two laboratory experiments that combined eye tracking and DCEs were conducted with 109 and 117 participants in the USA. The DCEs varied in task complexity by the number of product attributes and alternatives.
Results suggest that participants ignore both single attributes and entire alternatives. Increasing the number of alternatives significantly increased attribute NA. Including NA in choice modeling influenced results more in more complex DCEs.
The current experiments did not test for choice overload; future studies could investigate more complex designs. Because the choice environment affects decision-making, future research could also compare laboratory and field experiments.
Private and public sectors often use DCEs to determine consumer preference. Results suggest that DCEs with two alternatives are superior to DCEs with four alternatives because NA was lower in the two-alternative design.
This empirical research examined effects of attribute and alternative NA on choice modeling using eye tracking and DCEs with varying degrees of task complexity. Results suggest that accounting for NA reduces the risk of over- or understating the impact of attributes on choice, in that one avoids claiming significance for attributes that might not truly be preferred, and vice versa.
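Why ignoring NA over- or understates attribute impacts can be seen in a minimal binary-logit example (the shares and coefficients are hypothetical). Pooling attenders with non-attenders yields a single coefficient attenuated toward zero relative to the respondents who actually attend to the attribute:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    return math.log(p / (1.0 - p))

# Hypothetical binary choice: alternative A is one price unit cheaper than B.
beta_price = -1.0     # price coefficient of respondents who attend to price
share_na = 0.4        # share exhibiting price non-attendance (beta effectively 0)
dprice = -1.0         # price_A - price_B

p_attend = sigmoid(beta_price * dprice)            # P(A) among attenders
p_mix = (1 - share_na) * p_attend + share_na * 0.5 # non-attenders choose at random

# A single-coefficient model fitted to the pooled choice shares recovers a
# price coefficient attenuated toward zero:
beta_pooled = logit(p_mix) / dprice
print(p_attend, p_mix, beta_pooled)   # beta_pooled lies between beta_price and 0
```

Modelling NA explicitly (e.g. with a latent class whose price coefficient is fixed at zero) separates the two groups and restores the attenders' stronger sensitivity, which is the bias-reduction mechanism the abstract refers to.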
In cooperation with a German online retail bank, this paper investigates how the bank should price a new fee-only financial advisory service. The two types of pricing plans differ in their strategies for determining monthly prices: a fixed monthly price that is identical for all clients (i.e. a flat pricing plan) or a monthly price that varies as a function of each client's assets under management (i.e. a volume pricing plan).
With a discrete choice experiment, this article studies client preferences for the two types of plans. To ensure that the respondents understood the financial consequences of their decisions, a price calculator was embedded into the discrete choice experiment to enable the respondents to determine their individual monthly prices based on their assets under management.
Methodologically, the price calculator is useful for simplifying mathematically complex decisions, and it provides additional valuable information for analysis. Substantively, the results show that clients perceive both types of pricing plans as equally attractive; however, the service provider's revenues would increase by up to 12 per cent if it uses the volume pricing plan.
This research extends the stream of literature on the measurement of pricing-plan preferences and offers guidance for service industries such as telecommunications, cloud computing, insurance, or transportation. It extends the use of discrete choice experiments to study client preferences for different pricing plans and integrates a decision aid, i.e. a price calculator, into the experiment to assist clients in comparing alternatives more effectively.
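The mechanics of the embedded price calculator can be sketched as follows, with invented fee levels (the abstract does not report the bank's actual rates). Each respondent's monthly price under the volume plan depends on their own assets under management, which is why a calculator was needed to make the comparison concrete:

```python
# A sketch of the two pricing plans; all fee levels are hypothetical.
def flat_monthly_price(aum, flat_fee=25.0):
    """Flat plan: the same monthly fee for every client, regardless of assets."""
    return flat_fee

def volume_monthly_price(aum, annual_rate=0.006):
    """Volume plan: monthly fee as a fraction of assets under management."""
    return aum * annual_rate / 12.0

# With these invented numbers the plans break even at 50,000 in assets;
# wealthier clients pay more under the volume plan, which is how a volume
# plan can raise provider revenue even when both plans look equally attractive.
for aum in (20_000, 50_000, 100_000):
    print(aum, flat_monthly_price(aum), volume_monthly_price(aum))
```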
Currently, the state of practice in experimental design centres on orthogonal designs (Alpizar et al., 2003), which are suitable when applied to surveys with a large sample size. In a stated choice experiment involving interdependent freight stakeholders in Sydney (see Hensher & Puckett, 2007; Puckett et al., 2007; Puckett & Hensher, 2008), one significant empirical constraint was the difficulty of recruiting unique decision-making groups to participate. The expected relatively small sample size led us to seek an alternative experimental design: we decided to construct an optimal design that utilised extant information regarding the preferences and experiences of respondents, to achieve statistically significant parameter estimates under a relatively low sample size (see Bliemer & Rose, 2006).
The D-efficient experimental design developed for the study is unique, in that it centred on the choices of interdependent respondents. Hence, the generation of the design had to account for the preferences of two distinct classes of decision makers: buyers and sellers of road freight transport. This paper discusses the process by which these (non-coincident) preferences were used to seed the generation of the experimental design, and then examines the relative power of the design through an extensive bootstrap analysis of increasingly restricted sample sizes for both decision-making classes in the sample. We demonstrate the strong potential for efficient designs to achieve empirical goals under sampling constraints, whilst identifying limitations to their power as sample size decreases.
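The bootstrap logic of the sample-size analysis, resampling increasingly restricted samples and watching the precision of an estimate degrade, can be sketched with synthetic respondent-level estimates (all numbers invented; the paper's actual analysis bootstraps full model estimations):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-respondent marginal valuations (e.g. a buyer-side
# willingness to pay for a freight service attribute); 300 respondents.
respondents = rng.normal(loc=2.0, scale=1.5, size=300)

def bootstrap_se(sample, draws=2000):
    """Bootstrap standard error of the sample mean (resampling with replacement)."""
    means = [rng.choice(sample, size=len(sample), replace=True).mean()
             for _ in range(draws)]
    return float(np.std(means))

for n in (300, 100, 25):               # increasingly restricted sample sizes
    sub = respondents[:n]
    print(n, round(sub.mean(), 2), round(bootstrap_se(sub), 3))
```

The standard error grows roughly with the inverse square root of the sample size, which is the pattern that reveals where a design's statistical power gives out as recruitment shrinks.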