Decision making under risk and uncertainty is a fact of life. There are many ways of handling unknowns when making a decision. We will try to enumerate the most common methods used to get information prior to decision making under risk and uncertainty. We'll also look at decision rules used to make the final choice. For our purposes here, we are referring to decisions under the following conditions:

- We are using a quantitative model to determine a performance measure that will be used to make the decision.
- A single performance measure will be used as the basis for the decision.

There are many cases where we don't use quantitative models to make decisions. Instead we use past experience, gut feeling, common sense, and many other ways to arrive at a decision. There are also many decisions where there isn't a single performance measure that goes into a decision. This is especially true in engineering where balancing performance, cost, reliability, etc. involves trade-offs of many factors. The assumptions made above work well where we have a clear cut objective for which a model can be built, such as net present value of an investment.

## Overview

We will look at some of the more common quantitative analysis methods used in decision making under risk and uncertainty.

- Single point estimates
- Scenario analysis
- Break even analysis
- Decision trees
- Monte Carlo simulation

Once we've done the analysis to get information, we can apply decision rules for the actual decision. We'll look at various rules depending on risk or uncertainty.

### Decision Rules Under Uncertainty

- Laplace criterion
- Maximin
- Maximax
- Hurwicz
- Minimax regret

### Decision Rules Under Risk

- Maximum expected value
- Maximum utility
- Most probable outcome
- Composite criteria

## Decision Making Under Risk vs. Decision Making Under Uncertainty

Before we get too far, we need to define the difference between risk and uncertainty, because often no distinction is made between the two. We delved into the differences between risk and uncertainty in our Meaning of Risk article, so we will only briefly cover them here. We know outcomes are uncertain, but we may or may not have an idea of the probabilities involved. Blanchard & Fabrycky make a distinction between the two, which I will paraphrase:

**Decision Making Under Uncertainty:** We know that outcomes are uncertain, but we don't attempt to assign probabilities to the outcomes.

**Decision Making Under Risk:** In our analysis we assign probabilities to uncertain outcomes.

Often an uncertain situation has not only unknown probabilities but also unknown outcomes. We generally cannot make decisions about outcomes we are not even aware of, so we will leave that case out of our discussion.

## Analysis Methods

For this article we assume that in order to make decisions, we make use of a quantitative model that has uncertain inputs and calculates an output value. This model output might be net present value, market returns, project critical path time, etc. Of course, anyone can make a decision without the use of a model, but that's not our concern here.

In the case of risk, we assign probabilities to the possible outcomes and calculate the model output for each one.

In the case of uncertainty, we don't use probabilities because we don't know what they are, so we calculate model output for one or more scenarios.

Very often models are calculated using single point estimates, or for a few scenarios such as best case, worst case, and most likely case. This raises the question: is this decision making under risk or decision making under uncertainty? Since we don't explicitly state probabilities, one leans toward decision making under uncertainty. However, when we do a single point analysis, or use the most likely case, we are implicitly using probability by choosing the case with the highest probability. This is a minor point, because it doesn't change the results, but it shows how murky the line between risk and uncertainty can be.

### Single Point Estimates

#### Category: Mix of both

The single point estimate uses a single estimate of each unknown to determine our performance measure. This is often similar to using averages to make decisions. The downside to this method is that the likelihood of every unknown actually matching its estimate is low, and it doesn't account for downside risk. It also doesn't tell us the potential for upside opportunity.

When using single point estimates we only get one output value for each alternative. We don't need to employ decision rules as we will discuss later since there is a single performance value for each alternative. If we want to maximize the performance measure, such as net present value (NPV), we simply choose the alternative with the highest NPV.

Single point estimates can be used for both risk and uncertainty. When used for decision making under risk, often adjustment factors are used to account for the risk level present. For example, if net present value is the model output, the discount rate is adjusted depending on the level of perceived risk. Again, this is a gray area between risk and uncertainty since we are not explicitly stating probabilities, but we are implicitly using risk probabilities with the discount rate.
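A minimal sketch of this risk-adjusted discount rate idea, with hypothetical cash flows and rates (none of these numbers come from the article): the same single point cash flow estimates are discounted at a base rate and at a rate bumped up for perceived risk.

```python
# Single point estimate sketch: NPV with a risk-adjusted discount rate.
# Cash flows and rates are hypothetical, for illustration only.

def npv(rate, cash_flows):
    """Net present value, where cash_flows[t] occurs at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

cash_flows = [-1000, 400, 400, 400, 400]  # single point estimates per year

base = npv(0.08, cash_flows)    # discount rate for a typical project
risky = npv(0.15, cash_flows)   # rate bumped up to reflect perceived risk

print(f"NPV at 8%:  {base:.1f}")
print(f"NPV at 15%: {risky:.1f}")
```

Each alternative yields one NPV per rate, so no decision rule is needed: we simply pick the alternative with the highest NPV at the rate we consider appropriate.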

### Scenario Analysis

#### Category: Mix of both

This method takes the single point estimate and goes a few steps beyond. Instead of using a single point estimate for each unknown, the model is calculated many times while changing the input variables. When deciding under uncertainty, usually we are looking at worst case, most likely, and best case scenarios. For decision making under risk, we determine several discrete outcomes from the model and assign a probability to each outcome. The probabilities must add up to 1.

We can display the results in a decision matrix. In the case of decision making under risk, we also show the associated probabilities.

### Break Even Analysis

#### Category: Uncertainty

Break even analysis reverses the modeling process. We start by setting the output to our break even point, and solve for the input variables. This method doesn't tell us anything about the probability of failure or success. It merely gives the decision maker a sense of where they stand relative to the downside/upside.
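A simple sketch of the reversal, using a hypothetical cost/volume model (the numbers are invented for illustration): instead of computing profit from a sales volume, we set profit to zero and solve for the volume.

```python
# Break even sketch: set the model output (profit) to zero and solve
# for the unknown input (sales volume). All numbers are hypothetical.

fixed_cost = 50_000.0   # per year
unit_price = 25.0
unit_cost = 15.0

# profit(q) = q * (unit_price - unit_cost) - fixed_cost = 0  =>  solve for q
break_even_units = fixed_cost / (unit_price - unit_cost)

print(f"Break even volume: {break_even_units:.0f} units")
```

The result tells the decision maker only where the boundary sits: volumes above it are upside, below it downside, with no probability attached to either.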

### Sensitivity Analysis

#### Category: Uncertainty

Sensitivity analysis looks at how changing variables affects the output of the model. If output is sensitive to a variable, then a small change in the variable can result in a large change in output. By identifying the variables that drive output, we may be able to make better estimates, limit variation in the variable, monitor and adjust over time, or mitigate the risk to the variable at some cost.

Sensitivity may be hard to determine clearly. Two or more variables can interact so that, as they change together, they magnify or dampen the change in output. These interactions may not always be easy to identify.
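One common way to do a first pass is one-at-a-time sensitivity: vary each input around its base value while holding the others fixed, and record the swing in output. The model and numbers below are hypothetical, and note that this approach deliberately ignores the interaction effects just mentioned.

```python
# One-at-a-time sensitivity sketch: vary each input +/-10% around its
# base value and record the swing in model output. Model and numbers
# are hypothetical, for illustration only.

def profit(units, price, unit_cost, fixed_cost):
    return units * (price - unit_cost) - fixed_cost

base = dict(units=8000, price=25.0, unit_cost=15.0, fixed_cost=50_000.0)

swings = {}
for name in base:
    lo_inputs = dict(base, **{name: base[name] * 0.9})
    hi_inputs = dict(base, **{name: base[name] * 1.1})
    outputs = (profit(**lo_inputs), profit(**hi_inputs))
    swings[name] = max(outputs) - min(outputs)

# Variables with the largest swing drive the output the most.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name:>10}: swing = {swing:,.0f}")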

### Decision Trees

#### Category: Risk

Decision trees are best for projects that involve a sequence of decisions over time, which can result in many possible outcomes. Decision trees are inherently for decision making under risk since we must assign probabilities to each branch emanating from a chance node. A decision tree also incorporates the alternatives into one graphic showing the decisions to be made.

In a decision tree, squares represent decision points. Circles represent uncertainty, hence they are called chance nodes. The outcomes emanating from a chance node are uncertain so we assign probabilities to each outcome. End nodes are final outcomes, and are represented by triangles.

Decision trees are used to determine expected value or expected utility (more on these later). Starting at the end nodes we "roll back" the tree by calculating expected value at each node until we get to the root node. For more on decision trees, refer to the Decision Tree Basics tutorial.
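The roll back procedure can be sketched in a few lines: a chance node is worth the probability weighted average of its branches, and a decision node is worth its best branch. The small tree below is a hypothetical example, not one from the article.

```python
# Decision tree rollback sketch. Squares (decision nodes) take the best
# branch; circles (chance nodes) take the expected value of their
# branches; triangles (end nodes) hold final payoffs. Hypothetical tree.

def rollback(node):
    kind = node["kind"]
    if kind == "end":        # triangle: final payoff
        return node["value"]
    if kind == "chance":     # circle: probability weighted average
        return sum(p * rollback(child) for p, child in node["branches"])
    if kind == "decision":   # square: pick the best branch
        return max(rollback(child) for _, child in node["branches"])
    raise ValueError(f"unknown node kind: {kind}")

tree = {
    "kind": "decision",
    "branches": [
        ("launch", {"kind": "chance", "branches": [
            (0.6, {"kind": "end", "value": 500}),
            (0.4, {"kind": "end", "value": -200}),
        ]}),
        ("abandon", {"kind": "end", "value": 0}),
    ],
}

print(rollback(tree))
```

Here the "launch" chance node rolls back to 0.6(500) + 0.4(-200) = 220, which beats abandoning at 0, so the root decision is worth 220.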

### Monte Carlo Simulation

#### Category: Risk

Monte Carlo simulation is inherently a risk analysis tool since we assign probability distributions to all random variable model inputs. The input variables are randomly sampled and the output is recorded. This is repeated thousands of times resulting in a histogram of output values and their frequency. From this we can determine the range of potential outcomes and the probability of occurrence.
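A minimal Monte Carlo sketch using only the Python standard library, with a hypothetical profit model and assumed input distributions: sample the inputs, run the model, repeat many times, then summarize the output distribution.

```python
# Monte Carlo sketch: sample uncertain inputs from assumed probability
# distributions, record the model output, and summarize. The profit
# model and all distributions are hypothetical.

import random
import statistics

random.seed(42)  # fixed seed so the run is repeatable

def profit(units, price, unit_cost):
    return units * (price - unit_cost) - 50_000

results = []
for _ in range(10_000):
    units = random.gauss(8000, 1000)       # demand: normal(8000, 1000)
    price = random.uniform(22.0, 28.0)     # price: uniform(22, 28)
    unit_cost = random.gauss(15.0, 1.0)    # cost: normal(15, 1)
    results.append(profit(units, price, unit_cost))

results.sort()
mean = statistics.fmean(results)
p_loss = sum(r < 0 for r in results) / len(results)

print(f"mean profit ~ {mean:,.0f}")
print(f"P(loss) ~ {p_loss:.1%}")
print(f"5th..95th percentile ~ {results[500]:,.0f} .. {results[9500]:,.0f}")
```

Binning `results` into a histogram gives the frequency picture described above; the sorted list already gives the range and percentile probabilities directly.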

For more on Monte Carlo methods, refer to the Monte Carlo Simulation Basics article.

## Decision Rules Under Uncertainty

The following are some of the common decision rules used to compare alternatives under uncertainty. Since we do not address probabilities, we must rely on outcomes themselves to make decisions.

### Laplace Criterion

The Laplace Criterion could be classified under both risk and uncertainty. It assumes that every outcome has an equal probability of occurrence. Even though we are using probabilities, we are blindly assuming each outcome has an equal chance regardless of actual probabilities. I will follow Blanchard & Fabrycky's convention of including it in decisions under uncertainty.

To evaluate the alternatives we simply average the outcomes for each case. For the decision in Figure 1 we first calculate the average for each alternative.

Alternative 1 = (-100 + 150 + 220)/3 = 90

Alternative 2 = (-200 + 175 + 210)/3 = 61.7

Alternative 3 = (-250 + 160 + 275)/3 = 61.7

Using the Laplace Criterion, we should select alternative 1.
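The calculation above is easy to reproduce in code. The outcome matrix below is built from the Figure 1 values quoted in the text.

```python
# Laplace criterion: average each alternative's outcomes (equal weights)
# and pick the largest average. Outcomes are the Figure 1 values
# quoted in the article.

outcomes = {
    "Alternative 1": [-100, 150, 220],
    "Alternative 2": [-200, 175, 210],
    "Alternative 3": [-250, 160, 275],
}

averages = {alt: sum(vals) / len(vals) for alt, vals in outcomes.items()}
best = max(averages, key=averages.get)

for alt, avg in averages.items():
    print(f"{alt}: {avg:.1f}")
print("Choose:", best)
```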

### Maximin Criterion

The maximin criterion selects the alternative with the largest minimum outcome. This is a conservative criterion where we are selecting the alternative with the best worst case outcome. For the decision in Figure 1 we have the following worst case outcomes.

Alternative 1: -100

Alternative 2: -200

Alternative 3: -250

Using Maximin, we should select alternative 1 since it has the best worst case outcome.

### Maximax Criterion

The maximax criterion selects the alternative with the largest maximum outcome. This is an aggressive criterion where we are selecting the alternative with the best, best case outcome. For the decision in Figure 1 we have the following best case outcomes.

Alternative 1: 220

Alternative 2: 210

Alternative 3: 275

Using Maximax, we should select alternative 3 since it has the greatest best case outcome.
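Both rules reduce to one-line selections over the Figure 1 outcomes quoted above: maximin picks the best worst case, maximax the best best case.

```python
# Maximin vs. maximax on the Figure 1 outcomes quoted in the article.
# Maximin is conservative (best worst case); maximax is aggressive
# (best best case).

outcomes = {
    "Alternative 1": [-100, 150, 220],
    "Alternative 2": [-200, 175, 210],
    "Alternative 3": [-250, 160, 275],
}

maximin_choice = max(outcomes, key=lambda alt: min(outcomes[alt]))
maximax_choice = max(outcomes, key=lambda alt: max(outcomes[alt]))

print("Maximin choice:", maximin_choice)
print("Maximax choice:", maximax_choice)
```

The two rules disagree here (alternative 1 vs. alternative 3), which is exactly the gap the Hurwicz criterion below tries to bridge.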

### Hurwicz Criterion

The Hurwicz criterion is a compromise between maximin and maximax. In this rule we use a subjective coefficient to strike a balance between the two. The coefficient of pessimism, α, determines our weighting on the worst case (maximin) outcome. Consequently, 1 - α determines our weighting on the best case (maximax) outcome. To calculate using the Hurwicz criterion we use the following equation.

Hurwicz Criterion = α*(worst case outcome) + (1 - α)*(best case outcome)

*A side note: sometimes alpha is called the coefficient of optimism. In this case, α and 1 - α are reversed.*
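Applying the rule to the Figure 1 outcomes quoted above, with α as the coefficient of pessimism; the α value below is a subjective choice made here for illustration.

```python
# Hurwicz criterion on the Figure 1 outcomes, with alpha as the
# coefficient of pessimism. alpha = 0.6 is an assumed, subjective value.

outcomes = {
    "Alternative 1": [-100, 150, 220],
    "Alternative 2": [-200, 175, 210],
    "Alternative 3": [-250, 160, 275],
}

def hurwicz(vals, alpha):
    return alpha * min(vals) + (1 - alpha) * max(vals)

alpha = 0.6  # leaning pessimistic
scores = {alt: hurwicz(vals, alpha) for alt, vals in outcomes.items()}
best = max(scores, key=scores.get)

for alt, score in scores.items():
    print(f"{alt}: {score:.1f}")
print("Choose:", best)
```

Note that α = 1 reproduces maximin and α = 0 reproduces maximax, so sweeping α shows where the recommendation flips.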

### Minimax Regret

With Minimax Regret we are trying to minimize the worst case regret from our decision. To do this we construct a regret table as shown below. The decision matrix from Figure 1 is repeated here with the best outcome for each case shown in the bottom row. To calculate the regret values, we subtract each outcome from that case's best outcome. The last column in the regret table shows the worst regret for each alternative.

Using Minimax Regret, we should choose alternative 1 since it has the smallest worst regret.
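The regret table construction can be sketched directly from the Figure 1 outcomes quoted above: for each case, regret is that case's best outcome minus the alternative's outcome, and we choose the alternative with the smallest worst regret.

```python
# Minimax regret on the Figure 1 outcomes. For each case:
# regret = (best outcome for that case) - (this alternative's outcome).
# Choose the alternative whose worst regret is smallest.

outcomes = {
    "Alternative 1": [-100, 150, 220],
    "Alternative 2": [-200, 175, 210],
    "Alternative 3": [-250, 160, 275],
}

n_cases = 3
best_per_case = [max(vals[i] for vals in outcomes.values())
                 for i in range(n_cases)]

regret = {alt: [best_per_case[i] - vals[i] for i in range(n_cases)]
          for alt, vals in outcomes.items()}
worst_regret = {alt: max(r) for alt, r in regret.items()}
choice = min(worst_regret, key=worst_regret.get)

for alt, r in regret.items():
    print(f"{alt}: regrets {r}, worst {max(r)}")
print("Choose:", choice)
```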

## Decision Rules Under Risk

The following are some of the common rules used for decision making under risk.

### Maximize Expected Value

Using expected value maximization means deciding on the option that has the highest expected value. Expected value is the probability weighted average of all outcomes.

Expected Value = ∑(probability*outcome)

When using Monte Carlo simulation, expected value is the mean of all output data points generated.
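A sketch of the expected value rule over the Figure 1 outcomes quoted earlier. Only the 0.55 probability for case 2 appears in the article; the other two probabilities are assumed here so that the three sum to 1.

```python
# Expected value rule: probability weighted average of the outcomes,
# pick the largest. Outcomes are the Figure 1 values; probabilities
# other than the article's 0.55 for case 2 are assumed for illustration.

outcomes = {
    "Alternative 1": [-100, 150, 220],
    "Alternative 2": [-200, 175, 210],
    "Alternative 3": [-250, 160, 275],
}
probs = [0.25, 0.55, 0.20]  # assumed; must sum to 1
assert abs(sum(probs) - 1.0) < 1e-9

ev = {alt: sum(p * x for p, x in zip(probs, vals))
      for alt, vals in outcomes.items()}
best = max(ev, key=ev.get)

for alt, value in ev.items():
    print(f"{alt}: EV = {value:.2f}")
print("Choose:", best)
```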

### Maximize Expected Utility

Using the maximize expected value rule above means we are risk-neutral. In other words, we are indifferent to risk, and we simply want to maximize the expected payoff. Often, we do care about risk and want to avoid painful outcomes. In this case we are risk averse. How much pain we're willing to accept depends on our risk tolerance.

Utility functions are a way to convert payoffs into a utility value. The risk averse utility curve is concave and penalizes low or bad payoffs. The result is we tend to select an alternative that maximizes utility by avoiding riskier alternatives. There is also a condition where we could be risk seeking which results in a convex utility function. If we maximize expected utility in this case, we will tend toward riskier, higher payoff alternatives.

The exponential utility function is a common function used to convert payoff values into utility. By using a risk aversion coefficient, R, we can specify our risk attitude, which affects the shape of the utility curve. If R = 0, we are risk neutral and the utility curve is linear as shown in the chart below. If R > 0 we are risk averse. If R < 0 we are risk seeking.
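A sketch of the rule, using one common parameterization of the exponential utility function, U(x) = (1 - e^(-Rx))/R, chosen here because it matches the behavior described above: it approaches U(x) = x (risk neutral) as R goes to 0, is concave (risk averse) for R > 0, and convex (risk seeking) for R < 0. The payoffs are the Figure 1 values; the probabilities and the R value are assumed for illustration.

```python
# Expected utility sketch with an assumed exponential utility form
# U(x) = (1 - exp(-R*x)) / R. Payoffs are the Figure 1 values;
# probabilities and R are illustrative assumptions.

import math

def utility(x, R):
    return x if R == 0 else (1 - math.exp(-R * x)) / R

outcomes = {
    "Alternative 1": [-100, 150, 220],
    "Alternative 2": [-200, 175, 210],
    "Alternative 3": [-250, 160, 275],
}
probs = [0.25, 0.55, 0.20]  # assumed probabilities

R = 0.01  # mildly risk averse
eu = {alt: sum(p * utility(x, R) for p, x in zip(probs, vals))
      for alt, vals in outcomes.items()}
best = max(eu, key=eu.get)

for alt, value in eu.items():
    print(f"{alt}: expected utility = {value:.1f}")
print("Choose:", best)
```

With R > 0 the concave curve punishes the large losses of alternatives 2 and 3 far more than the expected value rule does, so the risk averse ranking separates the alternatives much more sharply.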

### Most Probable Outcome

Using the most probable outcome criterion we look at the most probable case and choose the most desirable alternative from that case. From the decision in Figure 2, case 2 is the most probable outcome with a probability of 0.55.

The performance measures of this case are:

Alternative 1: 150

Alternative 2: 175

Alternative 3: 160

Therefore, we should choose alternative 2.

### Composite Criteria

We may have a specific set of criteria that should, or must, be met. Composite criteria combine several criteria to arrive at a final selection of the best alternative. For example, we may only tolerate a maximum loss of 100, so we would eliminate all alternatives that have a possible loss greater than 100. At that point we may still have two or more viable alternatives, so we need a secondary, and maybe a tertiary, criterion to reduce the viable alternatives to a final winner. For example, we could choose, from the remaining alternatives, the one with the highest best case (maximax).
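The loss-screen-then-maximax example can be sketched against the Figure 1 outcomes quoted earlier; the maximum loss threshold of 100 is the article's example value.

```python
# Composite criteria sketch: first eliminate alternatives whose worst
# case loss exceeds 100 (the article's example threshold), then break
# any remaining tie with maximax. Outcomes are the Figure 1 values.

outcomes = {
    "Alternative 1": [-100, 150, 220],
    "Alternative 2": [-200, 175, 210],
    "Alternative 3": [-250, 160, 275],
}

max_loss = 100

# Primary criterion: worst case loss must not exceed max_loss.
viable = {alt: vals for alt, vals in outcomes.items()
          if min(vals) >= -max_loss}

# Secondary criterion: among survivors, highest best case (maximax).
winner = max(viable, key=lambda alt: max(viable[alt]))

print("Viable after loss screen:", sorted(viable))
print("Winner:", winner)
```

With these particular numbers the loss screen alone leaves a single survivor, so the secondary criterion never has to break a tie; with a looser threshold it would.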

When using Monte Carlo simulation we have hundreds or thousands of outcomes for each alternative, and we can calculate summary statistics of the simulation data. This allows us to use numerous criteria for selection based on our preferences. We could use the highest 90th percentile value, the highest mean value, lowest variance alternative and so on as decision criteria.

## References

Blanchard, B. S., Fabrycky, W. J. *Systems Engineering and Analysis, 4th Ed*. 2006. Pearson Prentice Hall, Upper Saddle River, NJ.

Park, C. S. *Contemporary Engineering Economics, 4th Ed*. 2007. Pearson Prentice Hall, Upper Saddle River, NJ.

*Regret (decision theory)*. (2018, January 17). *Wikipedia*. Retrieved 05:17, January 31, 2018, from https://en.wikipedia.org/w/index.php?title=Regret_(decision_theory)&oldid=820887038

Hurwicz, M. *The Hurwicz Criterion*. Retrieved January 31, 2019 from http://leonidhurwicz.org/hurwicz-criterion/