Class Options.AmericanOption

java.lang.Object
  Options.AmericanOption

American option on a single asset. Limited functionality; intended to support some experiments.
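The class works with payoffs discounted to time t=0. As an illustration of that convention, here is a standalone sketch (not this library's code; the put payoff, rate, and parameter values are assumptions) of the discounted payoff h_t of an American put:

```java
public class PayoffSketch {
    // h_t = e^{-r t} (K - S_t)^+ : the put payoff at time t,
    // discounted back to time t=0 (the convention used throughout
    // the AmericanOption methods).
    static double discountedPutPayoff(double S, double K, double r, double tYears) {
        return Math.exp(-r * tYears) * Math.max(K - S, 0.0);
    }

    public static void main(String[] args) {
        // in the money at t=0, no discounting
        System.out.println(discountedPutPayoff(90.0, 100.0, 0.0, 0.0));  // 10.0
        // out of the money: payoff is zero regardless of discounting
        System.out.println(discountedPutPayoff(110.0, 100.0, 0.05, 1.0)); // 0.0
    }
}
```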
Constructor Summary  
AmericanOption(Asset asset)
Constructor, does not initialize the option price path. 
Method Summary  
abstract double 
currentDiscountedPayoff(int t)
Option payoff discounted to time t=0 and computed from the current discounted price path of the underlying basket. 
double 
currentDiscountedPayoff(int t,
Trigger exercise)
The discounted option payoff h(rho_t) based on a given exercise strategy rho=(rho_t), computed from the current path of the underlying at time t; that is, the option has not been exercised before time t. 
RandomVariable 
deltaU(int t,
Trigger exercise)
The random variable U_{t+1}-U_t conditioned on F_t only, that is, it is assumed that a path of the underlying has already been computed up to time t. 
double 
discountedMonteCarloPrice(int t,
int nPath,
Trigger exercise)
Monte Carlo price at time t under a given exercise policy, computed as a conditional expectation conditioned on the information available at time t from a sample of nPath (branches of) the price path of the underlying. 
double 
discountedMonteCarloPrice(int nPath,
Trigger exercise)
Monte Carlo option price at time t=0. 
RandomVariable 
discountedPayoff(int s)
The discounted option payoff as a random variable when exercised at a fixed time s. 
RandomVariable 
discountedPayoff(Trigger exercise)
The discounted option payoff based on a given exercise policy as a random variable. 
double[] 
get_C()
Reference to the array C[ ] containing the discounted option price path. 
double 
get_dt()
Size of time step. 
double[] 
get_S()
Reference to the array S[ ] containing the discounted price path of the underlying. 
int 
get_T()
Number of time steps to horizon. 
Asset 
get_underlying()
Reference to the underlying asset. 
RandomVariable 
Kt(int nPath,
Trigger exercise,
int t)
The random variable (E_t[U_{t+1}-U_t])^+ in the definition of the random variable K from AmericanOptions.tex (3.15). 
Trigger 
pureExercise(int nBranch)
The naive exercise policy rho=(rho_t). 
double 
Q(int t,
int nBranch)
The approximation Q(t) = max{ E_t(h_{t+1}), E_t(h_{t+2}), ..., E_t(h_T) } for the continuation value CV(t), computed from the current path. 
double 
upperBound(int nPath,
Trigger exercise)
Computes the upper bound U_0 + Sum_{t<T} K_t, where K_t = (E_t[U_{t+1}-U_t])^+, for the option price V_0. 
Methods inherited from class java.lang.Object 
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait 
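To make the pricing machinery concrete, here is a hedged, self-contained sketch of what discountedMonteCarloPrice(nPath, exercise) does at t=0 when combined with the naive pureExercise policy. It does not use this library's Asset/Trigger types; the geometric Brownian motion dynamics, the put payoff, and all parameter values are assumptions chosen for the illustration:

```java
import java.util.Random;

public class NaivePolicySketch {
    static final int T = 50;            // number of time steps to horizon
    static final double dt = 0.02;      // size of time step (years)
    static final double r = 0.05, sigma = 0.3, K = 100.0, S0 = 100.0;

    // One geometric Brownian motion path of the underlying, S[0..T].
    static double[] path(Random rng) {
        double[] S = new double[T + 1];
        S[0] = S0;
        for (int t = 0; t < T; t++)
            S[t + 1] = S[t] * Math.exp((r - 0.5 * sigma * sigma) * dt
                    + sigma * Math.sqrt(dt) * rng.nextGaussian());
        return S;
    }

    // Discounted put payoff h_t along a path.
    static double h(double[] S, int t) {
        return Math.exp(-r * t * dt) * Math.max(K - S[t], 0.0);
    }

    // Monte Carlo price at t=0 under the naive policy: exercise at the first
    // time t where h_t >= alpha * Q(t), with Q(t) = max_{s>t} E_t[h_s]
    // estimated from nBranch branched continuations of the current path.
    static double price(int nPath, int nBranch, double alpha, long seed) {
        Random rng = new Random(seed);
        double sum = 0.0;
        for (int n = 0; n < nPath; n++) {
            double[] S = path(rng);
            double payoff = h(S, T);              // default: exercise at expiry
            for (int t = 1; t < T; t++) {
                if (h(S, t) <= 0.0) continue;     // never exercise out of the money
                double[] meanH = new double[T + 1];  // E_t[h_s] estimates, s > t
                for (int b = 0; b < nBranch; b++) {
                    double x = S[t];
                    for (int s = t + 1; s <= T; s++) {
                        x *= Math.exp((r - 0.5 * sigma * sigma) * dt
                                + sigma * Math.sqrt(dt) * rng.nextGaussian());
                        meanH[s] += Math.exp(-r * s * dt)
                                * Math.max(K - x, 0.0) / nBranch;
                    }
                }
                double Q = 0.0;
                for (int s = t + 1; s <= T; s++) Q = Math.max(Q, meanH[s]);
                if (h(S, t) >= alpha * Q) { payoff = h(S, t); break; }
            }
            sum += payoff;
        }
        return sum / nPath;
    }

    public static void main(String[] args) {
        System.out.println(price(200, 20, 1.0, 42L));
    }
}
```

Since the policy is suboptimal, the resulting price is a lower bound for the true American value; the nested branching per step is exactly why the class documentation suggests reimplementing the conditional expectations analytically where possible.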
Constructor Detail 
public AmericanOption(Asset asset)
Constructor, does not initialize the option price path. This is left to the concrete subclasses, which can decide whether analytic price formulas are available or not.
Parameters:
asset - the underlying asset.

Method Detail 
public int get_T()
public double get_dt()
public double[] get_C()
public double[] get_S()
public Asset get_underlying()
public abstract double currentDiscountedPayoff(int t)
Parameters:
t - time of exercise.

public RandomVariable discountedPayoff(int s)
Parameters:
s - fixed time of exercise.

public double currentDiscountedPayoff(int t, Trigger exercise)
The discounted option payoff h(rho_t) based on a given exercise strategy rho=(rho_t), computed from the current path of the underlying at time t; that is, the option has not been exercised before time t. The current path is given up to time t. From there the method computes a new path forward until the time of exercise. This is the quantity h(rho_t) in the terminology of AmericanOption.tex.
Parameters:
t - current time.
exercise - exercise strategy rho=(rho_t).

public RandomVariable discountedPayoff(Trigger exercise)
Parameters:
exercise - exercise policy.

public double Q(int t, int nBranch)
The approximation Q(t) = max{ E_t(h_{t+1}), E_t(h_{t+2}), ..., E_t(h_T) } for the continuation value CV(t), computed from the current path.
Parameters:
t - current time.
nBranch - number of path branches per conditional expectation.

public Trigger pureExercise(int nBranch)
The naive exercise policy rho=(rho_t). See AmericanOption.tex. Given that the option has not been exercised before time t, the stopping time rho_t exercises at the first time s >= t where h_s > alpha*max{ E_s(h_{s+1}), ..., E_s(h_T) }. Here alpha >= 1 is a parameter to be set to a value close to one.
The current general implementation computes the conditional expectations by Monte Carlo simulation. Reimplement this in subclasses where analytic formulas are available.
Parameters:
nBranch - number of path branches spent on conditional expectations.

public double discountedMonteCarloPrice(int t, int nPath, Trigger exercise)
Monte Carlo price at time t under a given exercise policy, computed as a conditional expectation conditioned on the information available at time t from a sample of nPath (branches of) the price path of the underlying.
Parameters:
t - current time (determines the information to condition on).
nPath - number of path branches used to compute the option price.
exercise - exercise policy.

public double discountedMonteCarloPrice(int nPath, Trigger exercise)
Monte Carlo option price at time t=0.
Parameters:
nPath - number of asset price paths used to compute the option price.

public RandomVariable deltaU(int t, Trigger exercise)
The random variable U_{t+1}-U_t conditioned on F_t only; that is, it is assumed that a path of the underlying has already been computed up to time t. See AmericanOption.tex.
It is crucial that the trigger exercise provides an intelligent implementation of the method nextTime, which moves the path of the underlying forward and does not use the default implementation in Triggers.Trigger.
Parameters:
t - current time.
exercise - the exercise trigger.

public RandomVariable Kt(int nPath, Trigger exercise, int t)
The random variable (E_t[U_{t+1}-U_t])^+ in the definition of the random variable K from AmericanOptions.tex (3.15). Conditioning is ignored.
Parameters:
nPath - number of paths spent on the expectation.
exercise - exercise strategy rho_t.
t - current time.

public double upperBound(int nPath, Trigger exercise)
Computes the upper bound U_0 + Sum_{t<T} K_t, where K_t = (E_t[U_{t+1}-U_t])^+, for the option price V_0. Here the process U_t is defined as U_t = max{ h_t, h(rho_t) }, where the exercise strategy rho_t is the parameter exercise below. See the file AmericanOptions.tex.
Parameters:
nPath - number of paths spent on the expectation.
exercise - exercise strategy rho_t.
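The upper-bound construction can be checked by hand on a small example. The following standalone sketch (not this library's code; the two-period binomial tree and all parameter values are assumptions chosen for the illustration) evaluates U_0 + Sum_t K_t with K_t = (E_t[U_{t+1}-U_t])^+ for the deliberately poor policy "exercise only at maturity", and compares it with the exact American value from backward induction:

```java
public class DualUpperBoundSketch {
    // Two-period binomial tree: S0 = 100, u = 1.2, d = 0.8, per-period
    // accrual R = 1.1, risk-neutral up-probability p = (R-d)/(u-d) = 0.75;
    // American put with strike K = 100.
    static final double u = 1.2, d = 0.8, R = 1.1, p = 0.75, K = 100.0, S0 = 100.0;

    // Discounted put payoff h_t = R^{-t} (K - S_t)^+.
    static double h(double S, int t) {
        return Math.pow(R, -t) * Math.max(K - S, 0.0);
    }

    // Exact American value V_0 by backward induction on the tree.
    static double americanValue() {
        double[] v = new double[3];
        for (int j = 0; j <= 2; j++)   // j = number of up moves
            v[j] = h(S0 * Math.pow(u, j) * Math.pow(d, 2 - j), 2);
        for (int t = 1; t >= 0; t--) {
            double[] w = new double[t + 1];
            for (int j = 0; j <= t; j++)
                w[j] = Math.max(h(S0 * Math.pow(u, j) * Math.pow(d, t - j), t),
                                p * v[j + 1] + (1 - p) * v[j]);
            v = w;
        }
        return v[0];
    }

    // Upper bound U_0 + Sum_t K_t with K_t = (E_t[U_{t+1}-U_t])^+ for the
    // policy "exercise only at maturity", so U_t = max{ h_t, h_2 } pathwise.
    static double upperBound() {
        int[][] moves = {{1, 1}, {1, 0}, {0, 1}, {0, 0}};  // uu, ud, du, dd
        double[] prob = new double[4];
        double[][] U = new double[4][3];
        for (int n = 0; n < 4; n++) {
            int ups = moves[n][0] + moves[n][1];
            prob[n] = Math.pow(p, ups) * Math.pow(1 - p, 2 - ups);
            double S = S0;
            double[] hPath = new double[3];
            hPath[0] = h(S, 0);
            S *= (moves[n][0] == 1) ? u : d;
            hPath[1] = h(S, 1);
            S *= (moves[n][1] == 1) ? u : d;
            hPath[2] = h(S, 2);
            for (int t = 0; t <= 2; t++) U[n][t] = Math.max(hPath[t], hPath[2]);
        }
        // K_0 = (E[U_1] - E[U_0])^+ (conditioning on the trivial F_0)
        double EU0 = 0.0, EU1 = 0.0;
        for (int n = 0; n < 4; n++) { EU0 += prob[n] * U[n][0]; EU1 += prob[n] * U[n][1]; }
        double K0 = Math.max(EU1 - EU0, 0.0);
        // E[K_1]: K_1 = (E_1[U_2 - U_1])^+ at each of the two time-1 nodes
        double EK1 = 0.0;
        for (int firstMove = 0; firstMove <= 1; firstMove++) {
            double pNode = 0.0, eU2 = 0.0, eU1 = 0.0;
            for (int n = 0; n < 4; n++)
                if (moves[n][0] == firstMove) {
                    pNode += prob[n]; eU2 += prob[n] * U[n][2]; eU1 += prob[n] * U[n][1];
                }
            EK1 += pNode * Math.max((eU2 - eU1) / pNode, 0.0);
        }
        return EU0 + K0 + EK1;
    }

    public static void main(String[] args) {
        System.out.println("V_0 = " + americanValue());
        System.out.println("upper bound = " + upperBound());
    }
}
```

On this tree V_0 is about 5.165 while the bound evaluates to about 5.888, so the bound holds but is loose, as expected for a policy that ignores early exercise.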