Options
Class AmericanBasketOption

java.lang.Object
  extended by Options.AmericanBasketOption

public abstract class AmericanBasketOption
extends java.lang.Object

American option on a basket of assets. Limited functionality. Intended to support some experiments.


Constructor Summary
AmericanBasketOption(Basket assets)
          Constructor, does not initialize the option price path.
 
Method Summary
 Trigger convexTrigger(double alpha, double beta, int nBranch)
          Trigger from convex expansion of the pure continuation region.
abstract  double currentDiscountedPayoff(int t)
          Option payoff discounted to time t=0 and computed from the current discounted price path of the underlying basket.
 double currentDiscountedPayoff(int t, Trigger exercise)
          The discounted option payoff h(rho_t) under a given exercise strategy rho=(rho_t), computed from the current path of the underlying at time t, given that the option has not been exercised before time t.
 double[] currentKt(int nPath, Trigger exercise, int t, double m_t)
          Returns result[0]==(U_{t+1}-U_t)^+ in the definition of the random variable K from AmericanOptions.tex (3.15), and result[1]==m_{t+1}, computed from the current path of the underlying up to time rho_t.
 double discountedMonteCarloPrice(int t, int nPath, Trigger exercise)
          Monte Carlo price at time t under a given exercise policy, computed as an expectation conditioned on the information available at time t, from a sample of nPath (branches of) the price path of the underlying.
 double discountedMonteCarloPrice(int nPath, Trigger exercise)
          Monte Carlo option price at time t=0.
 RandomVariable discountedPayoff(int s)
          The discounted option payoff as a random variable when exercised at a fixed time s.
 RandomVariable discountedPayoff(Trigger exercise)
          The discounted option payoff based on a given exercise policy as a random variable.
 double g(double x, double alpha, double beta, int t, int T)
          Function applied to Q(int, int) to approximate the true continuation value CV(t).
 double[] get_C()
          Reference to the array C[ ] containing the discounted option price path.
 int get_dim()
          Dimension of the underlying asset price vector (excluding the risk-free bond).
 double get_dt()
          Size of time step.
 int get_T()
          Number of time steps to horizon.
 Basket get_underlying()
          Reference to the underlying basket of assets.
 Trigger pureExercise(int nBranch)
          The naive exercise policy which exercises as soon as h(t) > alpha*max{ E_t(h(t+1)), ..., E_t(h(T)) }.
 double Q(int t, int nBranch)
          The approximation Q(t)=max{ E_t(h_{t+1}), E_t(h_{t+2}),..., E_t(h_T) } for the continuation value CV(t) computed from the current path.
 double upperBound(int nPath, Trigger exercise)
          This computes the upper bound U_0 + E( Sum_{t=0}^{T-1} (U_{t+1}-U_t)^+ ) for the option price V_0.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

AmericanBasketOption

public AmericanBasketOption(Basket assets)

Constructor, does not initialize the option price path. This is left to the concrete subclasses, which can decide whether analytic price formulas are available or not.

Parameters:
assets - the underlying basket of assets.
Method Detail

get_T

public int get_T()
Number of time steps to horizon.


get_dt

public double get_dt()
Size of time step.


get_C

public double[] get_C()
Reference to the array C[ ] containing the discounted option price path.


get_underlying

public Basket get_underlying()
Reference to the underlying basket of assets.


get_dim

public int get_dim()
Dimension of the underlying asset price vector (excluding the risk-free bond).


currentDiscountedPayoff

public abstract double currentDiscountedPayoff(int t)
Option payoff discounted to time t=0 and computed from the current discounted price path of the underlying basket.

Parameters:
t - time of exercise
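
A concrete subclass supplies the payoff by overriding this method. The sketch below is hypothetical and only illustrates where the payoff computation goes: the strike field, the constructor signature and the Basket accessor currentDiscountedValue(int) are assumptions for this sketch, not part of this class or of Basket as documented here.

  public class BasketCall extends AmericanBasketOption {

      private double K;   // strike of the call (hypothetical field, for this sketch only)

      public BasketCall(Basket assets, double K)
      {
          super(assets);  // the option price path C[ ] is left uninitialized by the superclass
          this.K = K;
      }

      // Discounted payoff of a call on the basket value, read off the current
      // discounted price path. The accessor currentDiscountedValue(t) is a
      // hypothetical stand-in for whatever your Basket implementation provides.
      public double currentDiscountedPayoff(int t)
      {
          double B_t = get_underlying().currentDiscountedValue(t);
          return Math.max(B_t - K, 0.0);
      }
  }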

discountedPayoff

public RandomVariable discountedPayoff(int s)
The discounted option payoff as a random variable when exercised at a fixed time s.

Parameters:
s - fixed time of exercise.

currentDiscountedPayoff

public double currentDiscountedPayoff(int t,
                                      Trigger exercise)
The discounted option payoff h(rho_t) under a given exercise strategy rho=(rho_t), computed from the current path of the underlying at time t, given that the option has not been exercised before time t. This is the quantity h(rho_t) in the terminology of AmericanOption.tex.

Parameters:
t - current time.
exercise - exercise strategy rho=(rho_t).

discountedPayoff

public RandomVariable discountedPayoff(Trigger exercise)
The discounted option payoff based on a given exercise policy as a random variable.

Parameters:
exercise - exercise policy.

Q

public double Q(int t,
                int nBranch)

The approximation Q(t)=max{ E_t(h_{t+1}), E_t(h_{t+2}),..., E_t(h_T) } for the continuation value CV(t) computed from the current path.

Parameters:
t - current time
nBranch - number of path branches per conditional expectation

pureExercise

public Trigger pureExercise(int nBranch)

The naive exercise policy which exercises as soon as h(t) > alpha*max{ E_t(h(t+1)), ..., E_t(h(T)) }. See AmericanOption.tex.

Current general implementation computes the conditional expectations by Monte Carlo simulation. Reimplement this in subclasses where analytic formulas are available.

Parameters:
nBranch - number of path branches spent on conditional expectations
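
A typical use is to turn this policy into a Monte Carlo lower bound for the option price. The following sketch uses only methods of this class; the sample sizes are illustrative only:

  Trigger naive = option.pureExercise(500);                            // 500 branches per conditional expectation (illustrative)
  double lowerBound = option.discountedMonteCarloPrice(20000, naive);  // price at t=0 under this (suboptimal) policy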

g

public double g(double x,
                double alpha,
                double beta,
                int t,
                int T)

Function applied to Q(int, int) to approximate the true continuation value CV(t).

Parameters:
x - positive real
alpha - parameter
beta - parameter
t - current time
T - time steps to expiration

convexTrigger

public Trigger convexTrigger(double alpha,
                             double beta,
                             int nBranch)

Trigger from convex expansion of the pure continuation region. Exercises as soon as h_t > beta*(Q(t)/beta)^alpha. See AmericanOption.ps.

Parameters:
alpha - parameter.
beta - parameter.
nBranch - number of path branches spent on conditional expectations.
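
Illustrative use (parameter values chosen purely for this sketch, not recommendations):

  Trigger convex = option.convexTrigger(0.9, 1.0, 500);            // alpha, beta, nBranch
  double price = option.discountedMonteCarloPrice(20000, convex);  // price at t=0 under this policy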

discountedMonteCarloPrice

public double discountedMonteCarloPrice(int t,
                                        int nPath,
                                        Trigger exercise)

Monte Carlo price at time t under a given exercise policy, computed as an expectation conditioned on the information available at time t, from a sample of nPath (branches of) the price path of the underlying.

Parameters:
t - current time (determines information to condition on).
nPath - number of path branches used to compute the option price.
exercise - exercise policy

discountedMonteCarloPrice

public double discountedMonteCarloPrice(int nPath,
                                        Trigger exercise)

Monte Carlo option price at time t=0.

Parameters:
nPath - number of asset price paths used to compute the option price.
exercise - exercise policy.

currentKt

public double[] currentKt(int nPath,
                          Trigger exercise,
                          int t,
                          double m_t)

Returns result[0]==(U_{t+1}-U_t)^+ in the definition of the random variable K from AmericanOptions.tex (3.15), and result[1]==m_{t+1}, computed from the current path of the underlying up to time rho_t.

The function advances the path of the underlying from time t to time t+1. We are only interested in result[0] but need result[1] for feedback in a loop over t to avoid duplication of computations.

Parameters:
exercise - exercise strategy rho_t.
t - current time.
m_t - m_t=h(rho_t) (to avoid duplication in loops).
nPath - number of paths spent on expectation.
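
The feedback loop described above might look as follows along a single simulated path. This is only a sketch of how the pieces fit together (assuming m_0 = h(rho_0) is obtained from currentDiscountedPayoff(0, exercise)), not the actual implementation of upperBound(int, Trigger):

  double sum = 0.0;
  double m_t = option.currentDiscountedPayoff(0, exercise);   // m_0 = h(rho_0)
  for (int t = 0; t < option.get_T(); t++) {
      double[] K_t = option.currentKt(nPath, exercise, t, m_t);
      sum += K_t[0];   // accumulate (U_{t+1}-U_t)^+ along this path
      m_t = K_t[1];    // feed m_{t+1} back to avoid recomputation
  }
  // Averaging sum over many simulated paths and adding U_0 yields the upper bound.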

upperBound

public double upperBound(int nPath,
                         Trigger exercise)
This computes the upper bound U_0 + E( Sum_{t=0}^{T-1} (U_{t+1}-U_t)^+ ) for the option price V_0. Here the process U_t is defined as U_t = max{ h_t, h(rho_t) }, where the exercise strategy rho_t is the parameter exercise below. See the file AmericanOptions.tex.

Parameters:
exercise - exercise strategy rho_t.
nPath - number of paths spent on expectation.
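
Together with discountedMonteCarloPrice(int, Trigger) this brackets the true price: under any exercise policy the Monte Carlo price is (up to simulation error) a lower bound for V_0, and the quantity computed here is an upper bound. A hypothetical use, with illustrative sample sizes:

  Trigger exercise = option.pureExercise(500);
  double low = option.discountedMonteCarloPrice(20000, exercise);   // lower bound on V_0
  double high = option.upperBound(20000, exercise);                 // upper bound on V_0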