

java.lang.Object Options.AmericanBasketOption
American option on a basket of assets. Functionality is limited; the class is intended to support some experiments.
Constructor Summary  
AmericanBasketOption(Basket assets)
Constructor, does not initialize the option price path. 
Method Summary  
Trigger 
convexTrigger(double alpha,
double beta,
int nBranch)
Trigger from convex expansion of the pure continuation region. 
abstract double 
currentDiscountedPayoff(int t)
Option payoff discounted to time t=0 and computed from the current discounted price path of the underlying basket. 
double 
currentDiscountedPayoff(int t,
Trigger exercise)
The discounted option payoff h(rho_t) based on a given exercise strategy rho=(rho_t), computed from the current path of the underlying at time t; that is, the option has not been exercised before time t. 
double[] 
currentKt(int nPath,
Trigger exercise,
int t,
double m_t)
result[0]==(U_{t+1}-U_t)^+ in the definition of the random variable K from AmericanOptions.tex (3.15), and result[1]==m_{t+1}, computed from the current path of the underlying up to time rho_t. 
double 
discountedMonteCarloPrice(int t,
int nPath,
Trigger exercise)
Monte Carlo price at time t dependent on a given exercise policy computed as a conditional expectation conditioned on information available at time t and computed from a sample of nPath (branches of) the price path of the underlying. 
double 
discountedMonteCarloPrice(int nPath,
Trigger exercise)
Monte Carlo option price at time t=0. 
RandomVariable 
discountedPayoff(int s)
The discounted option payoff as a random variable when exercised at a fixed time s. 
RandomVariable 
discountedPayoff(Trigger exercise)
The discounted option payoff based on a given exercise policy as a random variable. 
double 
g(double x,
double alpha,
double beta,
int t,
int T)
Function applied to Q(int, int) to approximate the true
continuation value CV(t) . 
double[] 
get_C()
Reference to the array C[ ] containing the discounted option price path. 
int 
get_dim()
Dimension of the underlying asset price vector (excluding the riskfree bond). 
double 
get_dt()
Size of time step. 
int 
get_T()
Number of time steps to horizon. 
Basket 
get_underlying()
Reference to the underlying basket of assets. 
Trigger 
pureExercise(int nBranch)
The naive exercise policy which exercises as soon as h(t) > alpha*max{ E_t(h(t+1)),...,E_t(h(T)) }. 
double 
Q(int t,
int nBranch)
The approximation Q(t)=max{ E_t(h_{t+1}), E_t(h_{t+2}),..., E_t(h_T) }
for the continuation value CV(t) computed from the current
path. 
double 
upperBound(int nPath,
Trigger exercise)
This computes the upper bound U_0 + E( Sum_{t<T}(U_{t+1}-U_t)^+ ) for the option price V_0. 
Methods inherited from class java.lang.Object 
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait 
Constructor Detail 
public AmericanBasketOption(Basket assets)
Constructor, does not initialize the option price path. This is left to the concrete subclasses, which can decide whether analytic price formulas are available or not.
Parameters:
assets - the underlying basket of assets.
Method Detail 
public int get_T()
public double get_dt()
public double[] get_C()
public Basket get_underlying()
public int get_dim()
public abstract double currentDiscountedPayoff(int t)
Parameters:
t - time of exercise

public RandomVariable discountedPayoff(int s)
Parameters:
s - fixed time of exercise.

public double currentDiscountedPayoff(int t, Trigger exercise)
The discounted option payoff h(rho_t) based on a given exercise strategy rho=(rho_t), computed from the current path of the underlying at time t; that is, the option has not been exercised before time t. This is the quantity h(rho_t) in the terminology of AmericanOption.tex.
Parameters:
t - current time.
exercise - exercise strategy rho=(rho_t).

public RandomVariable discountedPayoff(Trigger exercise)
Parameters:
exercise - exercise policy.

public double Q(int t, int nBranch)
The approximation Q(t)=max{ E_t(h_{t+1}), E_t(h_{t+2}),..., E_t(h_T) } for the continuation value CV(t), computed from the current path.
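The nested simulation behind Q(t) can be sketched as follows: for each future time s > t, average the discounted payoff over nBranch branches started from the state at time t, then take the maximum over s. This is a minimal sketch under a toy one-dimensional model; the class QEstimate, the method samplePayoff, and the random-walk dynamics are hypothetical stand-ins, not the library's implementation, which branches the current path of the underlying basket.

```java
import java.util.Random;

public class QEstimate {

    static final Random RNG = new Random(17);

    // Hypothetical stand-in for the discounted payoff h_s along one
    // branch started from the state at time t: a put-like payoff
    // (K - S)^+ with K = 1 on a driftless random walk.
    static double samplePayoff(double stateAtT, int t, int s) {
        double x = stateAtT;
        for (int u = t; u < s; u++) x += 0.1 * RNG.nextGaussian();
        return Math.max(1.0 - x, 0.0);
    }

    // Q(t) = max over s > t of the Monte Carlo estimate of E_t(h_s),
    // each conditional expectation using nBranch path branches.
    static double Q(double stateAtT, int t, int T, int nBranch) {
        double q = 0.0;
        for (int s = t + 1; s <= T; s++) {
            double sum = 0.0;
            for (int i = 0; i < nBranch; i++)
                sum += samplePayoff(stateAtT, t, s);
            q = Math.max(q, sum / nBranch);
        }
        return q;
    }

    public static void main(String[] args) {
        System.out.println("Q(0) = " + Q(1.0, 0, 10, 2000));
    }
}
```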
Parameters:
t - current time
nBranch - number of path branches per conditional expectation

public Trigger pureExercise(int nBranch)
The naive exercise policy which exercises as soon as h(t) > alpha*max{ E_t(h(t+1)),...,E_t(h(T)) }. See AmericanOption.tex.
The current general implementation computes the conditional expectations by Monte Carlo simulation. Reimplement this in subclasses where analytic formulas are available.
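The decision rule itself can be illustrated on precomputed arrays. All names below (PureExercise, firstExerciseTime, qEstimate) are hypothetical; in the library the continuation-value estimates would come from the Monte Carlo branching described above.

```java
public class PureExercise {

    // First time t with h(t) > alpha * qEstimate(t), where qEstimate(t)
    // stands in for max{ E_t(h(t+1)),...,E_t(h(T)) }; returns T
    // (expiration, where exercise is forced) if the rule never fires.
    static int firstExerciseTime(double[] h, double[] qEstimate, double alpha) {
        int T = h.length - 1;
        for (int t = 0; t < T; t++)
            if (h[t] > alpha * qEstimate[t]) return t;
        return T;
    }

    public static void main(String[] args) {
        double[] h = {0.1, 0.4, 0.2};   // payoff along one path
        double[] q = {0.3, 0.3, 0.0};   // continuation-value estimates
        // fires at t = 1, the first time h exceeds alpha * q
        System.out.println(firstExerciseTime(h, q, 1.0));
    }
}
```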
Parameters:
nBranch - number of path branches spent on conditional expectations

public double g(double x, double alpha, double beta, int t, int T)
Function applied to Q(int, int) to approximate the true continuation value CV(t).
Parameters:
x - positive real
alpha - parameter
beta - parameter
t - current time
T - time steps to expiration

public Trigger convexTrigger(double alpha, double beta, int nBranch)
Trigger from convex expansion of the pure continuation region. Exercises as soon as h_t > b*(Q(t)/b)^a. See AmericanOption.ps.
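The trigger condition is a one-liner once a continuation-value estimate Q(t) is available. A minimal sketch, with a and b played by the alpha and beta parameters; the class and method names are hypothetical, and Q(t) is passed in rather than computed:

```java
public class ConvexTrigger {

    // Exercise as soon as h_t > beta * (Q(t)/beta)^alpha.
    // For alpha = beta = 1 this reduces to the pure rule h_t > Q(t).
    static boolean isTriggered(double h_t, double q_t,
                               double alpha, double beta) {
        return h_t > beta * Math.pow(q_t / beta, alpha);
    }

    public static void main(String[] args) {
        // with alpha = beta = 1 the threshold is just q_t = 0.3
        System.out.println(isTriggered(0.5, 0.3, 1.0, 1.0));
    }
}
```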
Parameters:
alpha - parameter.
beta - parameter.

public double discountedMonteCarloPrice(int t, int nPath, Trigger exercise)
Monte Carlo price at time t for a given exercise policy, computed as a conditional expectation conditioned on the information available at time t and estimated from a sample of nPath (branches of) the price path of the underlying.
Parameters:
t - current time (determines the information to condition on).
nPath - number of path branches used to compute the option price.
exercise - exercise policy

public double discountedMonteCarloPrice(int nPath, Trigger exercise)
Monte Carlo option price at time t=0.
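The t=0 price is the plain average of the discounted payoff h(rho_0) over nPath fresh paths, where rho_0 is the first time the exercise policy fires. A self-contained sketch under a toy random-walk model; McPrice, ExercisePolicy, and the dynamics are hypothetical stand-ins for Basket and Trigger:

```java
import java.util.Random;

public class McPrice {

    // Hypothetical stand-in for the library's Trigger interface.
    interface ExercisePolicy { boolean isTriggered(double state, int t); }

    static double monteCarloPrice(int nPath, int T, ExercisePolicy exercise,
                                  long seed) {
        Random rng = new Random(seed);
        double sum = 0.0;
        for (int n = 0; n < nPath; n++) {
            double x = 1.0;                     // state at time 0
            int t = 0;
            // follow the path until the policy fires or expiration
            while (t < T && !exercise.isTriggered(x, t)) {
                x += 0.1 * rng.nextGaussian();  // advance one time step
                t++;
            }
            sum += Math.max(1.0 - x, 0.0);      // discounted payoff h(rho_0)
        }
        return sum / nPath;
    }

    public static void main(String[] args) {
        // naive policy: exercise as soon as the payoff is strictly positive
        double price = monteCarloPrice(20000, 10, (x, t) -> 1.0 - x > 0, 5L);
        System.out.println("price = " + price);
    }
}
```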
Parameters:
nPath - number of asset price paths used to compute the option price.

public double[] currentKt(int nPath, Trigger exercise, int t, double m_t)
result[0]==(U_{t+1}-U_t)^+ in the definition of the random variable K from AmericanOptions.tex (3.15), and result[1]==m_{t+1}, computed from the current path of the underlying up to time rho_t. The function advances the path of the underlying from time t to time t+1. We are only interested in result[0] but need result[1] for feedback in a loop over t to avoid duplication of computations.
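The feedback pattern looks like this in a loop: each step returns both the positive increment and m_{t+1}, and the caller feeds m_{t+1} back in as the next m_t so h(rho_t) is never recomputed. A minimal sketch with hypothetical names (KtLoop, and precomputed h and m arrays standing in for the path-dependent quantities):

```java
public class KtLoop {

    // One step: given h_t, m_t = h(rho_t) and the values at t+1,
    // return { (U_{t+1}-U_t)^+, m_{t+1} } with U_t = max{ h_t, m_t }.
    static double[] currentKt(double h_t, double m_t,
                              double h_next, double m_next) {
        double uT    = Math.max(h_t, m_t);
        double uNext = Math.max(h_next, m_next);
        return new double[] { Math.max(uNext - uT, 0.0), m_next };
    }

    public static void main(String[] args) {
        double[] h = {0.1, 0.3, 0.2};
        double[] m = {0.2, 0.25, 0.2};  // m_t = h(rho_t) along the path
        double sum = 0.0, mt = m[0];
        for (int t = 0; t + 1 < h.length; t++) {
            double[] r = currentKt(h[t], mt, h[t + 1], m[t + 1]);
            sum += r[0];   // accumulate Sum_t (U_{t+1}-U_t)^+
            mt = r[1];     // feed m_{t+1} back as the next m_t
        }
        System.out.println("sum of positive increments = " + sum);
    }
}
```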
Parameters:
exercise - exercise strategy rho_t.
t - current time.
m_t - m_t=h(rho_t) (to avoid duplication in loops).
nPath - number of paths spent on expectation.
public double upperBound(int nPath, Trigger exercise)
This computes the upper bound U_0 + E( Sum_{t<T}(U_{t+1}-U_t)^+ ) for the option price V_0. Here the process U_t is defined as U_t=max{ h_t, h(rho_t) } where the exercise strategy rho_t is the parameter exercise below. See the file AmericanOptions.tex.
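The mechanics of the bound can be sketched by accumulating the positive increments of U_t along simulated paths. This sketch uses the crude proxy U_t = h_t (immediate exercise, i.e. rho_t = t), which still yields a valid instance of the formula; the class name, the random-walk dynamics, and the proxy are all hypothetical, not the library's method, which uses U_t = max{ h_t, h(rho_t) } for the given policy.

```java
import java.util.Random;

public class UpperBound {

    static double payoff(double x) { return Math.max(1.0 - x, 0.0); }

    // U_0 + average over nPath paths of Sum_{t<T} (U_{t+1}-U_t)^+,
    // with the crude proxy U_t = h_t along a toy random walk.
    static double upperBound(int nPath, int T, long seed) {
        Random rng = new Random(seed);
        double u0 = payoff(1.0);   // U_0 = h_0 under the proxy
        double sum = 0.0;
        for (int n = 0; n < nPath; n++) {
            double x = 1.0, uPrev = payoff(x), pathSum = 0.0;
            for (int t = 0; t < T; t++) {
                x += 0.1 * rng.nextGaussian();
                double uNext = payoff(x);
                pathSum += Math.max(uNext - uPrev, 0.0); // (U_{t+1}-U_t)^+
                uPrev = uNext;
            }
            sum += pathSum;
        }
        return u0 + sum / nPath;
    }

    public static void main(String[] args) {
        System.out.println("upper bound = " + upperBound(20000, 10, 11L));
    }
}
```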
Parameters:
exercise - exercise strategy rho_t.
nPath - number of paths spent on expectation.