Quantitative methods for risk management in the real estate development industry

Journal of Property Investment & Finance

ISSN: 1463-578X

Article publication date: 21 September 2012

Citation

Gleißner, W. and Wiegelmann, T. (2012), "Quantitative methods for risk management in the real estate development industry", Journal of Property Investment & Finance, Vol. 30 No. 6. https://doi.org/10.1108/jpif.2012.11230faa.002

Publisher: Emerald Group Publishing Limited

Copyright © 2012, Emerald Group Publishing Limited


Article Type: Practice Briefing. From: Journal of Property Investment & Finance, Volume 30, Issue 6

Risk measures, risk aggregation and performance measures

1. Introduction

Investment in real estate is based on dynamic, uncertain and complex assumptions. This is especially true for real estate development as a speculative and entrepreneurial activity. Factors such as unknown future demand, risks and uncertainty are key elements of real estate development (Byrne, 1996; Isaac et al., 2010; Schulte and Bone-Winkel, 2002). Effective risk management is thus a decisive strategic success factor. Though not always evident during periods of strong economic growth, it is undoubtedly of paramount importance during economic downturns. The global financial crisis and the deterioration in real estate markets across large parts of Europe in 2008/2009 clearly demonstrated the significance of the real estate industry for the world economy. Despite the structural significance of real estate to the economy, and even though risk management has been widely analyzed in academic research, there remains limited substantive research on risk management that pertains directly to real estate development. Further, even less empirical data exists that can provide an overview of industry practice with respect to risk management by major development organisations (RICS, 2003; Shun, 2000). This paper provides a comprehensive overview of risk quantification, in which risks are expressed by probability distributions, and of the calculation of risk measures, such as "capital requirements" (value-at-risk). A key technique is the aggregation of risks by means of simulation, which creates transparency about planning certainty.

2. Definition of risk management and the importance of a sound risk assessment

In the following risk is generally referred to as:

[…] the uncertainty expressed through the significance and likelihood of events and their outcomes that could have a material effect on the goals of a real estate development organisation over a stated time horizon.

in accordance with the definition provided by Wiegelmann (2012a), which is in turn based on COSO (2004).

Enterprise risk management is defined by COSO (2004) as:

[…] a process, effected by an entity’s board of directors, management and other personnel, applied in strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within its risk appetite, to provide reasonable assurance regarding the achievement of entity objectives (COSO 2004, executive summary, p. 4).

Recent risk management standards and guidelines include, inter alia, the risk management standards of the Canadian Standards Association (1997), Standards Australia and Standards New Zealand (2004) and the Federation of European Risk Management Associations (2002). The Committee of Sponsoring Organisations of the Treadway Commission (known as COSO) provides a comprehensive guide to effective risk management. The International Organization for Standardization (ISO) has also published ISO 31000: "Risk Management – Principles and Guidelines". All these standards are comparable in respect of a sound risk management process. In general, each risk management framework constitutes a permanent, dynamic and systematic process in the sense of a control loop, with the risk management process essentially consisting of four phases, namely identification, assessment, control and monitoring (Figure 1).

Figure 1 Risk management process

Although each individual framework has these four core elements in common, the terminologies, individual components as well as the complexity of the control loop vary. The goal of the risk identification process is to identify possible risks, which may affect, either negatively or positively, the objectives of the business and the activity under analysis. Risk assessment is defined as the overall process of risk analysis and risk evaluation and helps in determining which risks have a greater consequence and impact than others as well as the probability of the event occurring. This is followed by the risk control phase, which evaluates whether the level of risk found during the assessment process requires management attention. Risk monitoring is the periodic tracking of risks and reviews the effectiveness of the treatment plan.

Risk assessment is the process of evaluating identified risks and the interrelations between risks. During risk assessment the individual risk situation of a real estate project, standing asset or portfolio is mapped; it forms the basis for the subsequent formulation of risk management and control strategies. It is necessary to quantify risks in order to assess the potential economic value of risk management, in particular to ensure a quantifiable quality improvement of corporate decisions by weighing up the expected returns and risks. A sound understanding of the possible outcomes of a development project is crucial for the developer to judge what level of return adequately compensates for the risks (Atherton et al., 2008).

The meaningfulness of the assessment models used in real estate development depends significantly on the quantity and quality of the available data (Wiegelmann, 2012b). Assessment methods can be broken down into quantitative and qualitative methods. The quantitative approaches are based on mathematical methods and only apply if sufficient risk-specific data is available. In the ideal scenario, where sufficient data is available, both the significance and the likelihood of an event can be derived on a quantitative, and therefore more objective, basis.

Quantitative assessment techniques can be broken down into benchmarking, probabilistic and non-probabilistic methods (COSO, 2004). The most rudimentary form of risk analysis takes the form of simple adjustments of development variables along the lines of worst- and best-case scenarios: for example, construction costs can be set higher, and rental values lower, than current estimates. Such rudimentary risk adjustment is, however, deterministic and highly subjective, leading to rather questionable estimates. A more systematic approach to risk analysis is sensitivity analysis, which examines the effects on profitability of changes (such as high, medium and low values) in any of the key variables. It identifies the key variables and how changes in individual variables might impact the final value. Scenario testing is a methodical improvement on sensitivity analysis; its aim is to examine how a combination of changes in the development variables of an appraisal affects its outcome.
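As an illustration, the following minimal Python sketch runs a one-at-a-time sensitivity analysis on a deliberately simplified development appraisal. All figures and the profit function itself are illustrative assumptions, not a model taken from the paper:

```python
# Hypothetical one-at-a-time sensitivity analysis for a simplified
# development appraisal. All inputs are illustrative assumptions.

BASE = {"rent": 500_000.0,        # annual rent roll (EUR)
        "cap_rate": 0.06,         # capitalisation rate
        "build_cost": 5_000_000.0,
        "land_cost": 1_500_000.0}

def profit(rent, cap_rate, build_cost, land_cost):
    gdv = rent / cap_rate                    # gross development value
    return gdv - build_cost - land_cost      # developer's profit

base_profit = profit(**BASE)
for name in ("rent", "cap_rate", "build_cost"):
    for shift in (-0.10, 0.10):              # vary each key variable by +/-10%
        scenario = dict(BASE, **{name: BASE[name] * (1 + shift)})
        delta = profit(**scenario) - base_profit
        print(f"{name} {shift:+.0%}: profit changes by {delta:+,.0f}")
```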

The consistent application of risk evaluation methods is a critical success factor for efficient risk management. What needs to be analyzed is the probability of occurrence and the possible extent of occurring risks, and, at best, their quantification. The question is, however, how well established the respective approaches are in business practice.

In an empirical survey of 69 leading European real estate development companies (response rate: 43.7 percent) in Germany, France, the UK, Italy, Austria, Spain and Switzerland, conducted by Thomas Wiegelmann in 2005, 98.5 percent of the survey participants (n=67) stated that efficient risk management is the basis for sustainable success in real estate development. At the same time, 44.1 percent of respondents (n=68) stated that, from their perspective, they did not have a sufficiently comprehensive concept for risk assessment. With respect to approaches and possible methods of risk evaluation, Figure 2 shows the distribution of answers (multiple responses were possible).

Figure 2 Risk assessment methods

69.9 percent of the surveyed real estate development organizations (n=69) approach risk assessment primarily from the subjective and intuitive perspective of the developer; the personal experience of the evaluating person and his or her "common sense" or "gut feeling" is accorded a high value. Further established methods are scenario techniques and sensitivity analyses, at 43.5 percent each. The preference for these evaluation approaches most likely lies in their practical and comparatively simple application. Traditional sensitivity analysis typically assumes best, base and worst case scenarios and their effect on expected outcomes, without taking into consideration uncertainty and the possible range of outcomes that may result (Atherton et al., 2008). It becomes obvious that simulation-based methods, at 10.1 percent, and value-at-risk approaches, at 7.2 percent, can hardly be considered widely established. The survey results indicate that developers' typical approach to the management of risks tends to be characterized by a lack of formalization and coordination, relying largely on individual judgment and experience. They therefore indicate considerable potential for improvement with respect to probability-based risk assessment.

The level of risk in real estate development projects, and also in standing assets and portfolios, determines the risk-based capital structure (capital requirements), the probability of default (rating), the minimum required expected rate of return (WACC) and the project value. The upper limit for the costs of risk management measures (e.g. insurance) is also determined by the risk level. As measures of success, profitability and risk are typically expressed in one key figure: a number, a monetary amount (e.g. in €) or a return. Risks therefore need to be quantified. It is a major challenge for risk analysis and risk management to describe risks quantitatively by appropriate "probability distributions" or "stochastic processes".

The quantitative description of a risk requires the use of frequency or probability distributions (or, in the multi-period case, stochastic processes). Risk thus concerns the unavoidable possible deviations from plan caused by an imperfectly foreseeable future, encompassing opportunities (potential positive deviations) and threats (potential negative deviations). To make risk easy to calculate, it is necessary to express the various kinds of risk with a (positive, real) number that is easily interpretable – the so-called risk measure, which enables a prioritization of risks.

This paper explains the most important methods for the quantification of risks. First, Section 3 discusses the need to quantify risks – or rather the impossibility of not quantifying them. Section 4 shows that foreseeable positive and negative deviations (opportunities and threats) are not only the basis for quantifying risk, but also that risk quantification and prediction systems are closely connected to each other. Section 5 briefly introduces the most important probability distributions suitable for the quantitative description of risks. Section 6 deals with the central technique of simulation-based risk aggregation, i.e. determining the total risk (e.g. capital requirements) based on the quantified individual risks. Building on that, Section 7 explains the risk measures by which the total risk can be expressed; the most important risk ratios are presented with these risk measures, and reference is made to measures of risk-bearing capacity (such as the equity ratio). Finally, Section 8 deals with performance measures, i.e. success benchmarks, constructed by the combination of:

  • a measure of risk; and

  • an earnings benchmark (expected value) – such as enterprise value (discounted cash flow), or a “risk adjusted” economic value added (EVA).

3. The non-quantification of risks is not possible

It is obviously desirable to have sufficient relevant historical data at hand, which can be analyzed by suitable statistical methods, when risks are to be quantified. Often, however, the question arises how to proceed if no adequate data and comparables are available. The empirical evidence presented in this paper shows that many real estate organizations have not established suitable methods to quantitatively describe their inherent risks.

According to Sinn (1980), the varying degrees of uncertainty (risk, probability) can always be traced back to the scenario of a "certainly known objective probability", which may subsequently be used for all further analyses and decisions. In the absence of other information, all potential situations are regarded as equally probable, following the principle of insufficient reason.

This means that all risks are quantifiable even if no information exists. Absolute ignorance regarding a risk threatening an investment would have to be expressed as follows: the likelihood of the risk is between 0 and 100 percent, and the extent of damage, should it occur, is between 0 and about €200 trillion, the approximate equivalent of all property on earth. One would probably quite quickly conclude that these two ranges can be restricted significantly based on the information available. The realistic ranges should be determined from the available information and in light of the heterogeneous assessments of various experts. Decisions aligned to risks should be based only on the information available, presented in an adequate manner – pseudo-accuracy is never desirable.

Where a decision maker is confronted with insufficient probability data, higher-level probabilities will have to be applied (Sinn, 1980):

  • where the probability distribution of probabilities is known, it may be translated into an objective first-level probability that is implicitly present; and

  • where, however, no probability of any level is known for the validity of a probability distribution, a uniform distribution that is known with certainty will have to be assumed.

The decision principle of “insufficient reason” (Sinn, 1980, p. 36) may thus be described as follows:

In the event of completely unknown probabilities for the states of the world, the decision maker will have to assess outcome vectors:

  • as if each state occurred with equal probability; and

  • as if such probability were an objective quantity known with certainty.

In real business situations, because information is incomplete and historical data is limited, it is often not easy to decide by what probability distributions a risk may be quantitatively described in an adequate way (Gleißner, 2011b).

Neglecting parameter uncertainties (meta risks) may result in an inappropriate underestimation of risk. Capturing such meta risks is imperative in order to facilitate a correct assessment of the risk scope of a company or an investment project.

A distinction is made based on whether the type of probability distribution or the parameters, respectively, are known or unknown.

Figure 3 Risk and planning certainty

In the classic decision theory risk scenario, both the type of probability distribution and all parameters are known with certainty (Figure 3):

  • A meta risk of type I underlies the scenario in which the type of probability distribution may be assumed to be known with certainty, whereas the parameters themselves are by their nature random variables.

  • For the meta risk of type II it is assumed that several probability distributions (with parameters known with certainty) are considered possible, whereas it is uncertain which of these distributions is valid. This means there is a second-level probability; it is thus necessary to model a probability distribution describing the probability of each respective first-level probability distribution.

  • The meta risk of type III combines the scenarios of type I and type II. This means there is initially uncertainty as to the validity of a probability distribution (which calls for a second-level probability distribution over the candidate distributions), and for each of the (first-level) probability distributions there is again uncertainty about the model parameters, which are here also regarded as random variables (Table I).

Table I Meta risk types

Risk analysis and risk management procedures must be pragmatic, as can also be shown theoretically: risk quantification (and consequently decisions in line with risks) should always be based on the best information available, or on information that can be made available with adequate effort. Using subjective (expert) assessments as the basis for risk quantification is in principle acceptable too. It should, however, be ensured that data quality is improved to the extent possible and economically sensible, for example by imposing a "constraint to give reasons" on experts or by using a number of information sources.

In the end, however, a more or less pronounced “meta risk” will always remain, or in other words the danger that a risk may have been quantified incorrectly. It is irrelevant if a situation was predictable without certainty or could not be predicted merely because of lack of information. Such uncertainty about the risk scope may – sensibly – be accounted for by presenting a parameter uncertainty in an explicit manner.

Data quality deficits therefore do not pose a problem for risk quantification as such; a problem arises only where the implications of the available data, and of their quality, for the decision-relevant risk scope are ignored, whether due to method-related inability or psychological unwillingness.

The conclusion therefore is that uncertainty about the risk scope and deficits in the data basis available for risk quantification will have to be considered in the decision-making process. “Worse data quality” itself will have a risk-increasing effect and may, for example, be accounted for by explicitly stating the parameter uncertainty of a probability distribution. Neither uncertainty about the degree of a specific risk (of a probability distribution) nor poor data, however, justify forgoing a risk quantification.

4. Risk quantification and prediction models, expectation formation and time series analysis

The very definition of risk as the possibility of deviation from plan shows that risks can only be quantified in relation to a specific, as explicitly stated as possible, planned value.

Risks result from the uncertain, (partially) unforeseeable future and from planned values, and therefore from forecasts. Thus, planning, forecasting and risk management systems are inevitably linked together. It is not the task of risk management (as is occasionally claimed) to predict or forecast future developments; this is the job of a forecasting system. Quantifying risk deals with the question of what extent of deviation may arise from a (best) forecast: risk is the possibility of deviation from plan. In so-called "stochastic" planning or forecasting models (e.g. a "stochastic corporate planning") all (important) forecasts are described by random variables, so that the expected value and a risk measure can be derived from a common basis. The former expresses what could happen "on average", while the risk measure describes the size of possible deviations.

The so-called "unbiased" planned values, i.e. the forecasts that may occur "on average", usually cannot be determined without knowledge of the opportunities and threats (risks). Besides the "most probable values", the less likely scenarios, i.e. the potential positive and negative deviations, should also be taken into account. It is essential that business decisions (e.g. investment appraisals) are made on the basis of expected values, and not just on the basis of the most likely value (or the median). Before the level of risk is quantified, a meaningful (unbiased) planned value or projection should be determined; this requires good forecasts, whereas "bad" (e.g. biased) forecasts lead to an overestimation of the true risk volume.

For the quantification of risks it is meaningful to separate the change of a variable into an expected and an unexpected component, the latter representing the risk volume. It is not the change in a variable as such, but the amount of unexpected change in the variable, that determines the risk.

When quantifying the risk, only the possible deviations from predictions (residuals, time-series innovations) are considered and described through an appropriate probability distribution (Section 5). The risk measures (such as the standard deviation) relate to unpredictable (unforeseeable) deviations – what is predictable is not a risk.

5. Quantitative description of risks through probability distributions

Risk quantification can be understood as the quantitative description of a risk and the derivation of a risk measure (an index/benchmark, as presented in Section 7), which makes risks comparable.

Basically, a risk should first be described by a suitable (mathematical) distribution function. Risks are often described by the likelihood and the amount of a loss occurring, i.e. by a so-called binomial distribution (digital distribution). Other risks, such as variations in maintenance costs and interest expenses, which may take different amounts with different probabilities, are described by other distribution functions (e.g. a normal distribution with mean and standard deviation). The binomial distribution, the normal distribution and the triangular distribution are the most important distribution functions in risk management practice (Albrecht and Maurer, 2005; Gleißner, 2011a) (Figure 4).

Figure 4 Quantitative description

Binomial distribution

The binomial distribution describes the probability that, in n repetitions of a so-called Bernoulli experiment, the event A occurs exactly k times. A Bernoulli experiment is characterized by exactly two possible events A and B, occurring with probabilities p and 1−p; these probabilities do not change during the experiment and the individual trials are independent. An example of the occurrence of this probability distribution is tossing a coin repeatedly.

A special case of the binomial distribution is the "digital distribution", in which the two possible events take the values zero and one. In practice a risk is often described in this way, by the likelihood and the amount of a loss occurring (within a specified period).

Normal distribution

The normal distribution is common in practice; this follows from the so-called central limit theorem.

This means that a random variable is approximately normally distributed if it can be understood as the sum of a large number of independent, smaller "individual risks". For example, suppose an organization has a large number of almost equally significant customers whose buying patterns do not depend on each other. We can then assume that deviations from planned sales will be approximately normally distributed; it is unnecessary to consider each customer individually, and the total turnover can be analyzed instead. The normal distribution is described by the expected value (μ), which indicates what happens "on average", and the standard deviation as a measure of the "usual" dispersion around the expected value.
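The following short Python sketch demonstrates this effect; the customer count and order sizes are arbitrary assumptions chosen purely for illustration:

```python
import numpy as np

# Minimal sketch: with many independent, similarly sized customers, total
# turnover is approximately normal (central limit theorem).
rng = np.random.default_rng(1)
orders = rng.uniform(0, 20_000, size=(10_000, 500))  # 500 customers, 10,000 scenarios
total = orders.sum(axis=1)                           # total turnover per scenario
print(f"mean {total.mean():,.0f}, sd {total.std():,.0f}")  # bell-shaped around ~5.0m
```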

Triangular distribution

The triangular distribution can be applied without deep statistical knowledge as an intuitively simple quantitative description of the risk of plan variables, such as a cost position. Only three values for the risk-bearing variable must be specified: the minimum value a, the most probable value b, and the maximum value c. An explicit estimation of probabilities is not required; the probabilities are implied by the three specified values and the type of distribution. Describing a risk with these three values resembles the scenario technique commonly used in practice, but here a probability density is calculated for all possible values between the minimum and the maximum. The following figure shows a triangular distribution using the example of the loss of key personnel (Figure 5).

Figure 5 Distribution for the loss of key personnel

In this case the quantification of the risk shows a cost of up to €125,000 if a key person were to leave the organization; it is also possible that no cost increase occurs at all. The most likely cost is €50,000.

The expected value of a triangular distribution is calculated as E(X) = (a + b + c)/3, and the standard deviation as σ(X) = √((a² + b² + c² − ab − ac − bc)/18).
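Applied to the key-personnel example from Figure 5 (a = 0, b = €50,000, c = €125,000), a short Python sketch computes these two statistics and cross-checks them against a sampled triangular distribution:

```python
import math
import numpy as np

# Figure 5 example: loss of key personnel, in EUR
a, b, c = 0.0, 50_000.0, 125_000.0
mean = (a + b + c) / 3
sd = math.sqrt((a**2 + b**2 + c**2 - a*b - a*c - b*c) / 18)
print(f"E(X) = {mean:,.0f}, sigma(X) = {sd:,.0f}")   # ~58,333 and ~25,685

# Cross-check against a large sample from the same triangular distribution:
sample = np.random.default_rng(0).triangular(a, b, c, size=200_000)
print(f"sample mean = {sample.mean():,.0f}, sample sd = {sample.std():,.0f}")
```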

In addition to the distributions mentioned above, a whole range of other probability distributions is important in risk management practice. For the quantitative description of "extreme risks" (such as "crashes" or natural disasters) the (generalized) Pareto distribution is usually applied (Zeder, 2007).

Instead of describing a risk directly through its (monetary) effects within a planning period (e.g. one year), it can also be described through two probability distributions that are aggregated first: one probability distribution for the frequency of loss and another for the (also uncertain) amount of loss per claim, which is common for insurable (event-driven) risks. For mapping more complex problems, a combination of two distributions may also be appropriate. For example, a risk in a liability suit can be described by a combination of the binomial distribution and the triangular distribution: first, the likelihood of losing the case (binomial distribution); second, the possible loss amount, specified by a minimum value, a most likely value and a maximum value.

For the evaluation of a risk we can draw on risk effects (losses) that occurred in the past, and use either industry benchmark values or self-constructed (realistic) loss scenarios, which accurately describe and explain the possible quantitative impact on the organization's performance. As a matter of principle, the impact on the development of sales and costs is considered.

So far, the probability distributions considered describe the effect of a risk at a point in time or within one period. The effect of many risks is, however, not limited to a single date or period. For example, to capture exchange rate risk adequately, the entire uncertain future development of the underlying (exogenous) risk factor, such as the dollar exchange rate, should be taken into account, and the interdependency of the risk effect from period to period must be considered: an (unexpected) change in the dollar exchange rate in 2011 has an impact on the exchange rate in the following year, 2012, because the rate at the end of 2011 is the starting rate for 2012. In order to describe the time evolution of uncertain target parameters or exogenous risk factors, a so-called "stochastic process" is therefore necessary, which can be thought of as a "multi-period probability distribution" (on stochastic processes, see for example Albrecht and Maurer, 2005).
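As a minimal sketch of such a stochastic process, the following Python snippet simulates an exchange rate as a multiplicative random walk, so that each year's (unexpected) change becomes the starting point of the next year; the starting rate of 1.30 and the 10 percent annual volatility are illustrative assumptions, not figures from the paper:

```python
import numpy as np

# Exchange rate as a multiplicative random walk (illustrative parameters).
rng = np.random.default_rng(2)
years, paths = 5, 10_000
shocks = rng.normal(0.0, 0.10, size=(paths, years))   # assumed 10% annual volatility
rates = 1.30 * np.cumprod(1.0 + shocks, axis=1)       # assumed starting rate of 1.30
print(np.percentile(rates[:, -1], [5, 50, 95]))       # realistic bandwidth after 5 years
```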

The following simple example shows how risks can be described quantitatively. Consider the risk of potential losses arising from a product-related liability suit. Here two probability distributions are combined. First, the probability that the legal case is lost at all is estimated (binomial distribution); based on an expert survey, this probability is estimated at 30 percent. The amount of the compensation payment in the event of losing is also uncertain and is estimated by (triangular distribution):

  • a minimum value of €1 million;

  • a most likely value of €2 million; and

  • a maximum value of €5 million.

In the determination of probabilities and bandwidths, different information sources (different expert estimates) are used, as the heterogeneity of the expert estimates reveals valuable information about the risk perception. It is also possible (and often useful) to present the uncertainty about the probability of the loss occurring in its own right (parameter uncertainty). For example, the probability that the case is lost could also be described by a range of possibilities (from 20 to 40 percent).
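A minimal Monte Carlo sketch of this combined binomial/triangular description of the liability risk (in € million, using the 30 percent loss probability and the 1/2/5 range from above) could look as follows:

```python
import numpy as np

# Combined description: binomial (case lost, p = 30%) x triangular (1/2/5m).
rng = np.random.default_rng(3)
n = 100_000
lost = rng.random(n) < 0.30                      # binomial part: case lost?
amount = rng.triangular(1.0, 2.0, 5.0, size=n)   # triangular part: loss if lost
loss = np.where(lost, amount, 0.0)               # combined risk per scenario
print(f"expected loss: {loss.mean():.2f}m")      # ~0.30 * 2.67 = 0.80m

# Parameter uncertainty (a 20-40% range instead of a fixed 30%) could be
# modelled by drawing p itself: p = rng.uniform(0.20, 0.40, size=n).
```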

It should finally be noted that problems occur especially when data is insufficient or subjective estimates must be used (Section 3); that is, the risk quantification itself must be regarded as risky. There is therefore a "risk of the second degree" ("meta risk"), as discussed in detail in Section 3.

In general terms, there are very flexible ways to describe any kind of risk by an adequate probability distribution. It is not appropriate to determine the type of probability distribution a priori.

6. Stochastic planning and risk aggregation using Monte Carlo simulation

The objective of risk aggregation is to determine the overall risk position of a project or an organization. The probability distributions of the individual risks are combined into a probability distribution of a target variable of the organization (e.g. earnings or cash flow). In a next step, risk measures that characterize the total risk can be determined for the entire organization (Section 7).

The evaluation of the overall risk enables a statement about whether an organization's risk-bearing capacity is sufficient to carry the risk and thus to ensure the survival of the organization in the long term. Should the identified risk exceed the risk-bearing capacity of the organization, additional risk management measures are required.

Risk aggregation is performed by simulation: the risks, described by probability distributions, are assigned to the items of the corporate plan, i.e. it is shown for each risk which planning item it may cause to deviate. With the help of risk simulation methods (Monte Carlo simulation), a large, representative number of possible risk-related future scenarios can be calculated and analyzed. This makes it possible to draw conclusions about the overall risk volume, the planning certainty and the realistic bandwidth of, for example, business performance.

A key benefit of using the Monte Carlo simulation is that it allows the developer to achieve an enhanced, more comprehensive understanding of the risk position (Loizou and French, 2012).

The Monte Carlo simulation provides a large "representative sample" of the risk-induced possible future scenarios of the organization, which is then analyzed. Aggregated frequency distributions result from the realizations of the target variable (e.g. earnings). From the frequency distribution of earnings we can directly derive risk measures, such as the capital requirement (RAC) of the organization (Section 7). To avoid over-indebtedness, the available capital should be at least sufficient to cover the losses that may occur.

The previously described risk aggregation model is always based on the corporate plan. Two (combinable) variants of risk assessment models are presented below:

  1. the direct consideration of the uncertainty related to the various planning items (i.e. characterization of planning items with a distribution, such as a normal distribution); or

  2. the separate quantitative description of a risk by an appropriate distribution function (e.g. amount of loss and probability of event-driven risks) and the allocation of this risk, in a second step, to the planning items where deviations from plan may arise.

The "risk factors" approach offers a further, combinable alternative for taking risk into account in the context of planning. In addition to the corporate plan, a model of the corporate environment with the variables relevant to the organization is constructed (Bartram, 1999). The corporate environment is described by exogenous factors such as exchange rates, interest rates, commodity prices and the economic situation (e.g. growth in demand). For all these exogenous factors of the business environment, forecasts are made to create a "plan-environment scenario". The dependence of the organization's plan variables on the exogenous factors is captured, for example, by elasticities, which show how an (uncertain) change in a risk factor impacts the plan variables (e.g. turnover).

There are some important advantages of using a risk factors model. First, it greatly simplifies the treatment of correlations (statistical dependence) among the uncertain (risk-bearing) variables in the income statement of an organization, which are difficult to estimate directly. For instance, if two different types of uncertain costs, K1 and K2, both depend (with different elasticities) on the common (exogenous) risk factors R1 and R2, these two costs are thereby correlated. A significant part of the correlations between individual risks or risk-bearing planning items thus results implicitly from the description of their dependence on exogenous risk factors in the business environment, such as the economy, exchange rates and commodity prices.
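The following sketch illustrates this point under assumed elasticities and factor volatilities (all numbers hypothetical): two cost positions are driven by two shared exogenous risk factors, and their correlation emerges implicitly rather than being estimated directly:

```python
import numpy as np

# Risk factor approach with assumed elasticities; all figures hypothetical.
rng = np.random.default_rng(4)
n = 50_000
R1 = rng.normal(0.0, 0.05, n)          # e.g. commodity price change
R2 = rng.normal(0.0, 0.03, n)          # e.g. exchange rate change
K1 = 3.0 * (1 + 0.8 * R1 + 0.2 * R2)   # cost position 1 (EUR million)
K2 = 2.0 * (1 + 0.3 * R1 + 0.9 * R2)   # cost position 2 (EUR million)
print(f"implied correlation: {np.corrcoef(K1, K2)[0, 1]:.2f}")  # not estimated directly
```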

The development of simulation-based risk aggregation models (e.g. with Excel and the Crystal Ball simulation software) is therefore not difficult. The following simple example illustrates how the "bandwidth" of an organization's profit for the next fiscal year is determined. The starting point is a very simple income statement. The planned turnover (€10 million) is risky and is described by a normal distribution (standard deviation: €2 million). The variable costs are 50 percent of turnover, and the fixed costs (including interest expense) are described by a triangular distribution: minimum value €4 million, most likely value €4.5 million and maximum €5.5 million. To aggregate these two risks – turnover and costs – Monte Carlo simulation is used, meaning that the simulation software calculates the organization's future earnings for, say, 20,000 possible scenarios. The outcomes are presented in Figure 6.

Figure 6 Distribution of earnings (in the sample case)

Two important findings can be observed:

  1. The expected profit is on average only €0.33 million, and thus lies below the planned profit of €0.5 million, because within the "fixed cost risk" the "threats" outweigh the "opportunities".

  2. The aggregate risk volume can be expressed by the amount of loss: for example, with a 95 percent confidence level, a loss of up to €1.4 million will not be exceeded (formally one speaks here of a "value-at-risk", VaR; see Section 7).

As the example shows, the output of risk quantification makes it possible to derive meaningful (unbiased) planned values and the range of possible (negative) deviations from plan.
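Under the stated distribution assumptions, the example can be reproduced with a few lines of Python (a minimal sketch; any Monte Carlo tool, such as Crystal Ball on top of Excel, would serve equally well):

```python
import numpy as np

# Reproduces the aggregation example above under its stated assumptions.
rng = np.random.default_rng(5)
n = 20_000                                   # scenarios, as in the example
turnover = rng.normal(10.0, 2.0, n)          # planned turnover: N(10, 2), EUR million
fixed = rng.triangular(4.0, 4.5, 5.5, n)     # fixed costs incl. interest expense
earnings = 0.5 * turnover - fixed            # variable costs are 50% of turnover

print(f"expected profit: {earnings.mean():.2f}m")      # ~0.33m, below the 0.5m plan
print(f"95% VaR: {-np.percentile(earnings, 5):.2f}m")  # loss not exceeded: ~1.4m
```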

7. Risk measures

7.1 Fundamentals

Should decisions be made under uncertainty (risk), the alternatives must also be evaluated with regard to their riskiness. Risk measures enable us to compare different risks with different characteristics and with different types of distribution and distribution parameters, such as amount of loss (Albrecht and Maurer, 2005).

The traditional risk measures of capital market theory (e.g. the CAPM and Markowitz portfolio theory) use the variance or the standard deviation (the square root of the variance) as volatility measures. That is, they quantify the extent of fluctuations of a risk parameter around the average development (expected value).

Variance and standard deviation are relatively easy to calculate and easy to understand. They consider both negative and positive deviations from the expected value. Most investors are, however, more interested in the negative deviations. The so-called downside risk measures are based on this view: the (valuation-relevant) risk is considered as a possible negative deviation from an expected value, and downside risk measures only consider these deviations. They include the value-at-risk, the Conditional value-at-risk and the lower semi-variance (an LPM2 risk measure).

Risk measures can be classified in various ways, one of them according to position dependence. Position-independent risk measures (such as the standard deviation) quantify risk purely as the extent of possible deviations; position-dependent risk measures, such as the value-at-risk, additionally depend on the expected value. Often such a risk measure is interpreted as a "required capital" or "required premium" for risk coverage (Section 7.2).

The two types of risk measure can be transformed into each other. For example, applying a position-dependent risk measure to the centered random variable X − E(X) instead of to the random variable X itself (e.g. earnings) results in a position-independent risk measure. Since the expected value enters the calculation of position-dependent risk measures, these can also be interpreted as a kind of risk-adjusted performance measure.

The main advantage of a position-independent risk measure is that the "height information" (expected outcome) and the "risk information" (deviation) are clearly separated, so that the two axes of a risk-return portfolio are independent of each other.

Position-dependent risk measures, in contrast, correspond more closely to the intuitive understanding of risk: with sufficiently high expected returns, the possible deviations lose importance, since they are then less likely to lead to a negative deviation from the target return (e.g. a minimum expected return).

7.2 Specific risk measures

The value-at-risk (VaR), a position-dependent risk measure, explicitly considers the impact of a particularly unfavorable development for the organization. It is defined as the amount of loss that is not exceeded within a certain period of time ("holding period", e.g. one year) with a fixed probability p (e.g. derived from a given target rating). Formally, the value-at-risk is therefore the negative p-percentile Q_p of a distribution (for small p; e.g. p = 5 percent corresponds to a 95 percent confidence level):

VaR_p(X) = −Q_p(X)

Figure 7 VaR, DVaR, CVaR

The position-independent counterpart of the value-at-risk is the deviation value-at-risk (DVaR, or relative VaR), defined as the value-at-risk of the centered variable X − E(X) (Figure 7):

DVaR_p(X) = VaR_p(X − E(X))

The value-at-risk (and the required capital, which can be regarded as the VaR in relation to the organization's earnings) is a risk measure that does not take into account the entire information of the probability density. What course the density takes below the chosen quantile (Q_p), i.e. in the range of extreme effects (losses), is irrelevant to the capital requirement; information that may be very important can therefore be neglected. In contrast, the shortfall risk measures – particularly the so-called lower partial moments (LPMs) – consider exactly this often interesting part of the probability density, from minus infinity up to a given target (upper bound c). This understanding of risk reflects the perception of an institution and the threat of falling short of a target it has set (such as a required minimum rate of return). In general, the LPM measure of order m is calculated as:

LPM_m(c; X) = E[(max(c − X, 0))^m]

In practice, three specific cases are usually considered: the shortfall probability (probability of default), i.e. m=0; the target shortfall or expected shortfall (m=1); and the target shortfall variance (m=2). In contrast to the variance, the lower semi-variance includes only negative deviations from the expected value in the calculation.

The aforementioned probability of default (PD), an LPM measure of order 0, indicates the probability that a variable X (such as shareholders' equity) falls below a predetermined threshold value (here, usually zero), and characterizes a rating (Gleißner, 2011a):

PD = P(X < 0)

The shortfall risk measures can be categorized into conditional and unconditional risk measures.

While the unconditional risk measures (such as the expected shortfall or the shortfall probability) do not condition on the target actually being missed, they can flow into the calculation of the conditional shortfall risk measures (such as the Conditional value-at-risk). The Conditional value-at-risk (CVaR) is the expected value of a risky variable conditional on it lying below the value-at-risk (VaR_{1−p}). While the value-at-risk measures the deviation that is not exceeded with a given probability within a given planning period, the Conditional value-at-risk indicates which impact is to be expected if this extreme case occurs, i.e. if the value-at-risk is exceeded. The Conditional value-at-risk thus takes into account both the probability of a "big" deviation and its magnitude.
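Given a simulated earnings sample (e.g. from the risk aggregation in Section 6), all of these measures can be estimated directly. The following Python sketch shows illustrative estimators; the sampled distribution parameters are taken loosely from the Section 6 example, and the estimator details are simplified assumptions rather than a standard library API:

```python
import numpy as np

def risk_measures(earnings, p=0.95, target=0.0):
    """Illustrative estimators for the measures discussed above,
    applied to a simulated sample of earnings (losses are negative)."""
    var = -np.percentile(earnings, (1 - p) * 100)      # value-at-risk
    cvar = -earnings[earnings <= -var].mean()          # conditional value-at-risk
    shortfall = np.maximum(target - earnings, 0.0)
    lpm0 = (earnings < target).mean()                  # shortfall probability (m = 0)
    lpm1 = shortfall.mean()                            # expected shortfall (m = 1)
    lpm2 = (shortfall ** 2).mean()                     # shortfall variance (m = 2)
    return var, cvar, lpm0, lpm1, lpm2

# Earnings sample roughly matching the Section 6 example (EUR million):
sample = np.random.default_rng(6).normal(0.33, 1.05, 100_000)
print(risk_measures(sample))
```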

Overall, it can be seen that a number of risk measures depend on a given restriction in the form of a maximum acceptable probability of default p (e.g. set by the creditor). The risk level expressed by risk measures such as the value-at-risk, the Conditional value-at-risk and the relative value-at-risk (deviation value-at-risk) is thus dependent on the given rating, which is itself a specific LPM0 risk measure.

Risk measures such as the VaR and the CVaR can be interpreted economically, quite simply, as "risk-related capital requirements".

8. Consideration of risk information in performance measures

Real estate development in general represents a very complex, dynamic challenge encompassing a variety of practice areas. Methodically adequate evaluation approaches, as well as well-grounded knowledge of the development process, the related risk aspects and their assessment, are indispensable for efficient risk management. Stochastic calculations enable an additional objectification of risk evaluation. Yet despite the available evaluation methods, risk evaluation is still primarily conducted on the basis of the subjective assessments of the respective parties.

In this respect, the available data and their specific project-related applicability are of the utmost importance. It can be concluded that a further establishment of risk management concepts, as well as supporting instruments at the strategic level, would also provide the operational business of a company with significant potential for optimization.

It is the central concern of value-oriented management that, in the preparation of corporate decisions, the expected returns and risks are weighed against each other. Figure 8 shows this basic idea.

Through the identification, quantitative description and aggregation of the risks of a project, the total risk (on the x-axis), expressed for example as a capital requirement (VaR), can be compared with the expected return of the project. This quantification of the risks makes it possible to check whether the aggregate total risk associated with the project can be borne by the organization (maximum risk line, derived from capital and liquidity reserves, i.e. risk-bearing capacity). A higher risk requires a higher expected profit (or higher returns); that is, projects should have a favourable risk-return profile to justify implementation or investment.

Figure 8 Trade-off between earnings and risk (risk-return trade-off)

If we want to position a project or an organization in the risk-return diagram of Figure 8 with one ratio, this leads us directly to performance measures. A performance measure is obtained by combining the expected value of the results (e.g. profit) with a risk measure.

A performance measurement can be carried out either ex ante or ex post. An ex ante performance measure is used as a projected success measure in preparing a decision for (or against) an organizational activity, such as an investment. In doing so, the uncertainty of any forecast of a future target variable X is explicitly taken into account, which is fundamental to any economic decision.

Such performance measures are therefore indicators that result from combining (through a function f) the expected outcome E(X) (e.g. expected profit) with a suitable risk measure R(X), such as the standard deviation or the value-at-risk, where the risk measure shows the extent of possible deviations from plan:

P(X) = f(E(X), R(X))

In the simplest case, the performance measure P(X) for the uncertain profit X is the expected value reduced by a risk discount that depends directly on the risk measure R(X), e.g. the value-at-risk (of profits) or the capital requirement. For example:

P(X) = E(X) − λ · R(X)

The deduction of the risk discount (λ · R(X)) from the expected value corresponds to the procedure for determining so-called "certainty equivalents": the secure outcome that, from the perspective of the institution, is equivalent to the uncertain income X. If we choose, for example, the capital requirement as our risk measure, we can interpret the variable λ, the "price of risk", as the imputed (additional) cost of equity. The risk discount then corresponds exactly to the imputed cost of capital or risk costs.

Enterprise value (capital value/NPV), value added (EVA) and RoRAC or the Sharpe ratio (SR) are performance measures of this kind. The Sharpe ratio, for example, is defined as:

SR = (E(r_A) − r_f) / σ(r_A)

with:

r_A = return on investment; r_f = risk-free rate; σ(r_A) = standard deviation of the return on investment as the risk measure.

As an alternative to the Sharpe ratio, performance measures can be considered in which the excess return (relative to the risk-free investment) is set in relation to LPM risk measures. Such performance ratios are called "return to shortfall" (RTS) ratios.

The enterprise value can also be interpreted as a performance measure, because discounting the expected future earnings or cash flows makes it risk-adjusted. In order to regard the model-based calculated value as a performance measure, it is necessary that the discount rate (or cost of capital) is actually derived from the future risks, and not from historical capital market data (stock returns) within the framework of the Capital Asset Pricing Model. In this approach the discount rate (cost of capital) can be regarded as a risk-based requirement for the return of a project or organization, i.e. the risk level determines the required return on investment (hurdle rate) via the cost of capital. A higher amount of risk leads to potentially higher (negative) deviations from plan, or losses, resulting in a higher capital requirement and thus higher capital costs.

RAVA stands for "risk-adjusted value added", a performance measure whose negative can be read as a position-dependent risk measure (Section 7.1). Unlike today's conventional performance measures, such as EVA, this performance measure carries out an adequate and planning-consistent risk assessment (Gleißner, 2011a).

RAVA reduces the expected profit (the expected operating profit E(X) less the risk-free return on the capital employed, r_f · CE) by a risk discount λ · EKB, where EKB (risk-adjusted capital, also known as RAC) is commonly employed as a measure of the risk-based capital requirement:

RAVA = E(X) − r_f · CE − λ · EKB

The application of the performance measure RAVA is simple. In the risk aggregation case study above (Section 6), an expected profit of €0.33 million (after interest expense, r_f · CE) and a risk-adjusted capital (required capital) of €1.4 million (the value-at-risk at the 5 percent level) were determined.

Assuming (for simplicity) a risk premium on capital of λ = 10 percent, the imputed cost of capital to be considered in the "performance assessment" of the organization is 10 percent × €1.4 million, i.e. €0.14 million.

RAVA is accordingly calculated as: RAVA = €0.33 million − €0.14 million = €0.19 million.

9. Conclusion and recommendations

Although real estate development is a multi-billion euro industry with high relevance for a multitude of stakeholders, developers appear to have an unstructured and ad hoc approach towards the management of risks, relying largely on individual judgement and experience (Wiegelmann, 2012a). Real estate developers often base significant investments on back-of-the-envelope calculations (Gimpelevich, 2011). The lack of available financing in the context of the global financial crisis and the downturn in investment markets have increased exit risks and pricing insecurity for many development schemes. We expect that development organisations which fail to implement a risk management system, and thus do not deal adequately with the identification, assessment and management of risks, will find themselves increasingly penalised by capital markets and financing partners. The quantification of risks can deliver the significant economic benefits of risk-based management, especially in supporting decision-making under uncertainty. The apparent alternative of not quantifying risks does not actually exist, because leaving risks unquantified amounts to implicitly quantifying them at zero.

The quantification of risks starts with the quantitative description of the risks by appropriate probability distributions. As business activities, projects or entire companies are generally subject to a number of risks, these must be aggregated to determine the overall risk. This requires the use of Monte Carlo simulation, in which a large, representative sample of risk-induced possible future scenarios is calculated. Risk-related information therefore adds value to "traditional" organization or investment planning. From the frequency or probability distribution of the total risk, the so-called "risk measures", such as the standard deviation or the value-at-risk, are derived. In practice it is particularly convenient (based on the value-at-risk) to express the overall level of risk as a capital requirement, that is, the amount of capital needed as a safeguard against risk. In this way a risk-adjusted financing structure of a project or an organization can be determined. The rating (to assess the threat to the organization) and the risk-adjusted cost of capital (as a requirement on returns) can also easily be derived.

The central benefit of quantitative methods in risk management is that they enable us to deal with the trade-off between expected returns and risks in the business decision-making process. And since the quality of corporate decisions, especially in the face of an uncertain and only partly foreseeable future, largely determines a real estate development organization's success, the quantitative methods of risk management are a key success factor for the company.

Werner Gleißner, FutureValue Group AG, Germany, and

Thomas Wiegelmann, FRICS, Bond University, Robina, Australia

References

Albrecht, P. and Maurer, R. (2005), Investment- und Risikomanagement, Schäffer-Poeschel Verlag, Stuttgart

Atherton, E., French, N. and Gabrielli, L. (2008), “Decision theory and real estate development: a note on uncertainty”, Journal of European Real Estate Research, Vol. 1 No. 2, pp. 162–82

Bartram, S.M. (1999), Corporate Risk Management, Uhlenbruch, Bad Soden a. Ts

Byrne, P.J. (1996), Risk, Uncertainty and Decision-Making in Property Development, 2nd ed., E. & F.N. Spon, London

Gimpelevich, D. (2011), “Simulation-based excess return model for real estate development”, Journal of Property Investment & Finance, Vol. 29 No. 2, pp. 115–44

Gleißner, W. (2009), “Metarisiken in der Praxis: Parameter- und Modellrisiken in Risikoquantifizierungsmodellen”, Risiko Manager, No. 20, pp. 14–22

Gleißner, W. (2011a), Grundlagen des Risikomanagements im Unternehmen, 2nd ed., Vahlen, München

Gleißner, W. (2011b), “Wertorientierte Unternehmensführung und risikogerechte Kapitalkosten: Risikoanalyse statt Kapitalmarktdaten als Informationsgrundlage”, Controlling, Vol. 23 No. 3, pp. 165–71

Isaac, D., O’Leary, J. and Daley, M. (2010), Property Development: Appraisal and Finance, 2nd ed., Palgrave Macmillan, Basingstoke

Loizou, P. and French, N. (2012), “Risk and uncertainty in development: a critical evaluation of using the Monte Carlo simulation method as a decision tool in real estate development projects”, Journal of Property Investment & Finance, Vol. 30 No. 2, pp. 198–210

RICS (2003), The Management of Risk – Yours, Mine and Ours, RICS Project Management Faculty, London, available at: www.akc.ie/documents/PMRiskFINAL.pdf (accessed 4 June 2012)

Schulte, K.-W. and Bone-Winkel, S. (2002), “Grundlagen der Projektentwicklung aus immobilienwirtschaftlicher Sicht”, in Schulte, K.-W. and Bone-Winkel, S. (Eds), Handbuch Immobilien-Projektentwicklung, 2nd ed., Immobilien Informationsverlag Rudolf Müller, pp. 27–90

Shun, C.K. (2000), “Review of risk management techniques for property development”, Henley working paper, Henley Management College, London

Sinn, H.W. (1980), Ökonomische Entscheidungen bei Ungewissheit, C.B. Mohr (Paul Siebeck), Tübingen

Wiegelmann, T. (2012a), “Risikoeinschätzung in der Immobilien-Projektentwicklung”, Immobilien & Finanzierung, Vol. 63 No. 8, pp. 261–3

Wiegelmann, T. (2012b), “Risk management in the real estate development industry: investigations into the application of risk management concepts in leading European real estate development organisations”, dissertation (in press)

Zeder, M. (2007), Extreme Value Theory im Risikomanagement, Versus Verlag, Zürich

Further Reading

Hoesli, M., Jani, E. and Bender, A. (2006), “Monte Carlo simulations for real estate valuation”, Journal of Property Investment & Finance, Vol. 24 No. 2, pp. 102–22
