
Portfolio Choice with State-dependent Adjustments

Analyzing leveraged positions without parametric assumptions

Peter Farkas (Central European University)
Friday, April 25, 2014

Corresponding author: Nador u. 9, Budapest 1051, Hungary; farkas-peter@ceu.budapest.edu.
Abstract

In this paper, we discuss a new method for solving the portfolio choice problem which models state-dependent quantity adjustment using boundary crossing events. This new method has several advantages: first, we can solve for the optimal portfolio weights without parametric assumptions, by deriving them directly from the data. Next, we can describe the full distribution of the portfolio's value, not just its moments. Finally, we can easily deal with important practical issues, such as transaction costs, leveraged positions and no-ruin conditions, or the cost of margin financing. In particular, the method allows us to analyze leveraged positions in discrete time under zero ruin probability. Analyzing historical stock data suggests that historically, the log-optimal portfolio was a not too extensive leveraged purchase of a diversified stock portfolio; therefore, leveraging does not necessarily imply risk-seeking behavior. We also show that depending on how much weight we allocate to this diversified stock portfolio, the downside risk measured as the 5% VAR of the portfolio's value may be decreasing or increasing over time. Consequently, objective functions which incorporate the VAR of the portfolio's value or operate with VAR constraints result in horizon-dependent portfolio weights. We also present some evidence suggesting that the log-optimal portfolio weights are time-dependent. Finally, irrespective of what weight we choose for the diversified stock portfolio, it is log-optimal to reduce exposure to the stock market if the predicted volatility is high, and increase it in low volatility periods.
JEL: G11, G17, C14
Keywords: optimal portfolio choice, log optimality, GOP, transaction costs, leveraged positions, horizon dependence, VAR constraints, non-parametric, first exit time, boundary crossing counting, backtesting
1 Introduction
The optimal portfolio choice is an equally important problem for theoreticians, for financial practitioners, as well as for any non-professional with an investment decision to make. According to Brandt's (2009) review, there is a renewed theoretical interest in this problem, which is motivated by the fact that relatively recent empirical findings (predictability, conditional heteroscedasticity) have not yet been fully incorporated into the theory of portfolio allocation. This review also states that the main direction of academic research is to identify key aspects of the real-world portfolio problem and to understand how these aspects influence the decisions of institutions and individuals. Hopefully, such efforts will reduce the gap between the theory and the practice of portfolio management.
In this paper, we aim to follow this general academic direction: we hope to provide financial theory with some new results, and at the same time we also aim to provide practitioners with a useful tool. The essence of our innovation is a new, state-dependent quantity adjustment mechanism which is based on the boundary crossing counting processes introduced by Farkas (2013). Plainly speaking, we propose to change the quantities each time the appropriately adjusted, weighted price index changes more than a predefined limit, rather than with a constant frequency such as once a day. We show how to calculate the portfolio's return by counting the number of adjustments. The biggest advantage of this approach is that it allows us to describe the full distribution of the portfolio's return, not just its expected value, without parametric assumptions or simulations. Consequently, we can solve the portfolio choice problem under VAR constraints. Also, our approach makes it straightforward to deal with certain practical issues, such as proportional transaction costs, the cost of margin financing upon leveraged positions, and finally the issues related to no-ruin conditions. From an applied theoretical point of view, our paper describes a direct, non-parametric estimation method. From a practitioner's point of view, we introduce a back-testing technique which is useful for robust, real-world analysis of historical data.
The methodology in this paper differs from the typical agenda on portfolio choice. Usually, solving for the optimal portfolio consists of three steps. The first step is to specify and estimate a parametric model describing the returns of the risky assets, a step we want to avoid as it has proven notoriously difficult to come up with an accurate parametric model. Classical papers such as Markowitz (1952), Merton (1969, 1971), Samuelson (1969), and Malkiel and Fama (1970) often assume that prices follow a Geometric Brownian Motion and abstract away from financial frictions. Unfortunately, the GBM hypothesis is often rejected in practice, as discussed by Lo and MacKinlay (1999), by Cont (2001) or by Campbell and Thompson (2008). Additionally, even if we assume a parametric model, we have to face the fact that we do not know the true parameters, a problem analyzed by the literature on parameter uncertainty, as shown by Jobson and Korkie (1980), Best and Grauer (1991), and Chopra and Ziemba (1993). The next step is to solve the investor's optimization problem, during which researchers often abstract away from frictions. This is done simply to reduce the complexity of the problem; however, financial frictions have proven to be important from a theoretical as well as a practical point of view, as discussed by Constantinides (1986), by Dumas and Luciano (1991), and by Balduzzi and Lynch (1999). The last step is to plug the potentially biased parameter estimates into a potentially oversimplified portfolio choice solution and infer the optimal portfolio weights from there.
An alternative path is to make use of the large amount of available data and try to obtain optimal portfolio weights directly, without parametric assumptions. This approach is called the "alternative econometric approach" in Brandt's (2009) review, while practitioners often call it backtesting. This is basically a direct attack on the problem, where we try to obtain portfolio weights directly from the data. Besides the obvious benefit of not having to build a parametric model for the returns, another important advantage is the reduction in dimensionality and hence the gain in degrees of freedom. An important, but sometimes overlooked, issue summarized below is related to the fact that constant portfolio weights typically require us to occasionally change the number of securities in the portfolio (unless we want to hold 100% of our wealth in one asset only). The fundamental problem is as follows: investors, big or small, are typically interested in controlling for the weights of the risky assets, while stock and commodity exchanges do not offer this option: only the quantity of risky securities can be controlled for. Therefore, controlling for the weights requires occasionally changing the number of securities in the portfolio.
\[
\max_W S(U(V_T)) =
\begin{cases}
\max_W S(U(V_T(W, RF, RR))) & \text{continuous adjustment} \\
\max_W S(U(V_T(Q, P, RR, tr))) & \text{otherwise}
\end{cases}
\tag{1}
\]
where $S(.)$ is a stochastic function, typically some combination of the variable's expected value, variance, or some extreme value statistics, such as VAR limits. In both cases, $U(.)$ is a non-path-dependent utility function having a well-defined maximum (not all utility functions have one; linear utility functions typically tend to be problematic in this regard), and utility depends only on the terminal value, $V_T$. Note that expanding this formulation to other, path-dependent forms is possible, yet not detailed here due to space constraints. Also, $W$ is the weight of the risky assets, $Q$ is the number of securities held, $RF$ and $P$ describe the return of the risky assets, $RR$ describes the return on the composite asset, and finally $tr$ is the proportional transaction cost. The literature has offered three solutions to this problem.
1. Continuous adjustment, started by Merton (1969, 1971), simply abstracts away from this problem by assuming that we can change the number of securities continuously and hence can control for the variable which drives the portfolio's value, $V_t$.

2. Time-dependent adjustment proposes to change quantities at a given time-frequency. For example, assuming daily quantity adjustment is a typical time-dependent mechanism frequently used in practice.

3. State-dependent adjustment proposes to calculate quantities based on some state variable, for example assuming that quantities are changed each time the appropriately weighted cumulative price change since the last adjustment reaches some pre-defined critical level. This type of adjustment is often used in the literature on optimal inattention, which originates from Baumol (1952) and Tobin (1956) and was later taken up to deal with transaction costs by Constantinides (1986) and Dumas and Luciano (1991).
The main innovation of this paper is to introduce a new way to model state-dependent adjustments, which has several advantages. Firstly, from a theoretical point of view, we show a new analytical, although not exact, solution to the simple portfolio choice problem under log-utility. Similarly to Constantinides (1986) or Dumas and Luciano (1991), we can allow for proportional transaction costs. Next, we can analyze leveraged positions as well as short-selling without truncating the underlying distribution, because state-dependent adjustment prevents ruin events from occurring. This is especially important for today's economy, since economic conditions have led to record-high levels of margin loans in the United States.
Figure 1: End-of-month figures based on the New York Stock Exchange Factbook
Moreover, time-dependent adjustment may overestimate transaction costs in case the adjustment is too frequent. Finally, the proposed state-dependent adjustment describes the portfolio's value using a discrete stochastic variable, which greatly facilitates dealing with stochastic issues. In particular, we can calculate not only the optimal weight and the corresponding expected returns, but the full distribution of these returns as well, which is useful when solving VAR-constrained problems. Overall, our paper is a useful complement to other, non-parametric studies relying on time-dependent adjustment, such as Brandt (1999) or Brandt (2003), or Cover's Universal Portfolio approach, detailed by Cover (1991), which has been extended, for example, by Blum and Kalai (1999) to incorporate transaction costs, and further explored by Györfi and Vajda (2008) and by Horváth and Urbán (2011).
The paper is structured as follows. The second section first explains how to use boundary crossing counting processes (BCC processes) to analyze portfolio choice problems. We continue by briefly discussing the theory behind these stochastic processes and explain how they are related to the first exit time distributions and the upper boundary crossing probabilities. As a conclusion for this section, we put our method into perspective by solving the simple portfolio choice problem under state-dependent adjustment and by comparing the results with Merton's continuous solution under Geometric Brownian Motion and log-utility. In the third section, we apply our method to actual security data, and the last section concludes.
2 Portfolio Choice and Quantity Adjustment
2.1 State-dependent adjustment
The following readjustment scheme is similar in spirit to the one recommended by Dumas and Luciano (1991) for portfolio allocation or by Martellini and Priaulet (2002) for option pricing. Here, we basically assume that the number of securities held is changed once the cumulative, financing-adjusted weighted price change since the last adjustment reaches some exogenously chosen, pre-defined levels. The first process of Figure 2, called the restarted process, shows the cumulative price changes between two readjustments. The second process, $YUL_t$, first introduced by Farkas (2013) and called the boundary-crossing counting process or BCC process, as its name suggests, simply counts the number of boundary-crossing events.
Figure 2: Boundary Crossing Counting Process
We differentiate between four different types of BCC processes.

1. $YU_t$ counts the number of upper crossing events: $YU_t = YU_t + 1$ if $X_t = UB_t$, and $X_{t+} = X_0$.

2. $YL_t$ counts the number of lower crossing events: $YL_t = YL_t + 1$ if $X_t = LB_t$, and $X_{t+} = X_0$.

3. $YUL_t = YU_t + YL_t$ counts the number of upper and lower crossing events.

4. $YD_t = YU_t - YL_t$ describes the difference between the upper and the lower crossing events.
For an appropriately chosen $X_t$, we can describe the portfolio's value at time $t \in T^{UL}$, where $T^{UL}$ is the set of boundary crossing moments, as follows:
\[
V_t = V_0 \cdot GU^{YU_t} \cdot GL^{YL_t} \tag{2}
\]
The stochastic elements are $YU_t$ and $YL_t$, describing the number of boundary crossing events, while $GU > 1$ and $0 < GL < 1$ are exogenous constants describing the change in wealth upon boundary crossings. It is important to highlight that the stochastic variables are discrete. Abstracting away from potential liquidity constraints and price discontinuities, no-ruin conditions only require finite weights for the risky assets, as in the worst case $V_t = V_0 \cdot GL^k > 0$ for any $k \in \mathbb{N}$. Without loss of generality, we can normalize the initial portfolio's value to one. (It would also make sense to normalize the initial value to $V_0 = \sum |w_i| \cdot (1 - tr)$, which would then take into account the cost of entry. In this paper, we aim to analyze annualized returns over long horizons, therefore we abstract away from these initial costs.) The log of wealth is then equal to:
\[
\log(V_t) = YU_t \log(GU) + YL_t \log(GL) \tag{3}
\]
Since we choose $GU$ and $GL$ exogenously, we can set them in a way that $GU = 1/GL$, and the equation further simplifies as:
\[
\log(V_t) = YD_t \log(GU) \tag{4}
\]
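To make the bookkeeping of equations (2)-(4) concrete, here is a minimal Python sketch (our own illustration, with made-up inputs, not part of the original derivation) that evaluates the log of wealth from the crossing counts:

```python
import numpy as np

# A minimal sketch of equations (2)-(4): the portfolio's log-value as a
# function of the number of boundary crossing events. GU and GL are the
# exogenous wealth multipliers; here GU = 1/GL as in the text, V_0 = 1.
def log_wealth(yu: int, yl: int, gu: float) -> float:
    """log(V_t) = YU_t*log(GU) + YL_t*log(GL) with GL = 1/GU."""
    gl = 1.0 / gu
    return yu * np.log(gu) + yl * np.log(gl)

# Example: 12 upper and 9 lower crossings with GU = 1.02 imply
# log(V_t) = (12 - 9) * log(1.02), i.e. YD_t * log(GU).
print(log_wealth(12, 9, 1.02))      # 0.0594...
print((12 - 9) * np.log(1.02))      # identical, via equation (4)
```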
As detailed in the appendix, the following restarted process restricts the change in the portfolio's value to two discrete values:
\[
X_t = \frac{1 + \sum_j w_j p_j - \sum_j w_j \frac{P'_j}{P_j} tr_j + dbs(t)}{1 - \sum_j w'_j tr_j} \tag{5}
\]
where $j$ indexes the assets in the portfolio, $w_j$ and $w'_j$ are the portfolio weights, $P_j$ are the prices at the last boundary crossing (or the initial prices), $P'_j$ are the actual prices, $tr$ is the proportional transaction cost, and finally $dbs(t)$ is the cost of financing expressed as a percentage of the portfolio's value. This structure ensures that the percentage change in the portfolio's value takes only two discrete values, by balancing out the change due to the variation in the risky assets' value and the change due to financing. Theoretically, we can choose from an infinite number of potential boundaries. Here, we will not analyze this choice in detail, but try to place them at approximately a seven standard deviation distance, which has some desirable statistical properties, as detailed in Farkas (2013).
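For illustration, the following sketch (an assumption-laden toy with hypothetical inputs, not the estimation code used in the paper) evaluates the signal of equation (5) for a single readjustment period:

```python
import numpy as np

# A minimal sketch of the restarted-process signal in equation (5).
# All inputs are illustrative: w/w_new are the weights before and after
# the readjustment, p the percentage price changes since the last
# crossing, price_ratio = P'_j / P_j, tr_signed the proportional
# transaction cost (signed per the appendix: +tr for purchases, -tr for
# sales), and dbs the financing cost as a fraction of portfolio value.
def signal(w, w_new, p, price_ratio, tr_signed, dbs):
    w, w_new = np.asarray(w), np.asarray(w_new)
    p, price_ratio = np.asarray(p), np.asarray(price_ratio)
    numer = 1.0 + np.sum(w * p) - np.sum(w * price_ratio * tr_signed) + dbs
    denom = 1.0 - np.sum(w_new * tr_signed)
    return numer / denom

# Single risky asset, weight 1.5, price up 1.4% since the last crossing,
# 10 bps transaction cost, 2 bps financing cost over the period:
x = signal([1.5], [1.5], [0.014], [1.014], 0.001, -0.0002)
print(x)  # compare against the boundaries GU and GL
```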
Finally, the stochastic issues are straightforward, since BCC distributions are discrete. Assuming logarithmic utility as an illustration, the expected value, for example, can be calculated as:
\[
E(\log(V_t)) = \log(GU) \cdot E(YD_t) = \log(GU) \sum_i p(YD_t = i) \cdot i \tag{6}
\]
Note that the expected value of $YD$ can be well approximated by $E(YD_t) \approx YD^{count}_t$, where $YD^{count}_t$ indicates the number of events observed in the data. This approximation is considerably faster and, based on simulations, relatively precise if we observe more than 30 crossing events. Calculating other stochastic measures is also straightforward; for example, a 5% VAR value can be calculated as:
\[
VaR(\log(V_t), 5\%) = \log(GU) \sum_{i \le i_{VaR5}} p(YD_t = i) \cdot i \tag{7}
\]
where $\sum_{i \le i_{VaR5}} p(YD_t = i) = 0.05$. Overall, we have shown that if we can calculate the boundary crossing counting distribution, then we can also characterize the full distribution of the portfolio's value, as well as many stochastic properties derived from the full distribution, such as expected values or VAR limits.
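The following minimal sketch (with illustrative probabilities only; for simplicity the VAR limit is taken as the 5% quantile of $YD_t$, a slight simplification of equation (7)) shows how both quantities can be read off a discrete distribution:

```python
import numpy as np

# A minimal sketch of equations (6)-(7): the expected log-wealth and a
# 5% VAR limit read off a discrete YD distribution. `support` and
# `probs` describe p(YD_t = i); both are illustrative, not estimated.
def expected_log_wealth(support, probs, gu):
    return np.log(gu) * np.sum(np.asarray(probs) * np.asarray(support))

def var_log_wealth(support, probs, gu, alpha=0.05):
    # Find the alpha-quantile of YD_t, then convert to log-wealth units.
    order = np.argsort(support)
    s, p = np.asarray(support)[order], np.asarray(probs)[order]
    cutoff = s[np.searchsorted(np.cumsum(p), alpha)]
    return np.log(gu) * cutoff

support = [-3, -2, -1, 0, 1, 2, 3, 4]
probs   = [0.02, 0.04, 0.10, 0.20, 0.28, 0.20, 0.10, 0.06]
print(expected_log_wealth(support, probs, gu=1.02))
print(var_log_wealth(support, probs, gu=1.02))   # 5% VAR of log(V_t)
```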
2.2 Financing costs
The formula for the restarted process includes the amount of interest collected or paid on the composite asset, which will be discussed next. Financing costs are assumed to be a piecewise linear function of the portfolio weights.
\[
dbs_t = \frac{\sum_{i=T_{ul}}^{t} DBS(i) \cdot RF(i)}{V_{t-1}} \tag{8}
\]
In particular, deposit interest (D) is collected on the fraction of wealth that is not invested in risky assets, if any.
\[
D =
\begin{cases}
1 - \sum W(w > 0) & \text{if } \sum W(w > 0) < 1 \\
0 & \text{otherwise}
\end{cases} \tag{9}
\]
Borrowing (B) occurs if we decide to finance the purchase of risky assets by margin loans.
\[
B =
\begin{cases}
1 - \sum W(w > 0) & \text{if } \sum W(w > 0) > 1 \\
0 & \text{otherwise}
\end{cases} \tag{10}
\]
Finally, in case we want to short-sell (S) some risky assets, then we have to borrow the securities, which is also assumed to be costly.
\[
S =
\begin{cases}
\sum W(w < 0) & \text{if } \sum W(w < 0) < 0 \\
0 & \text{otherwise}
\end{cases} \tag{11}
\]
where $W(w > 0)$ and $W(w < 0)$ are used to indicate the positive/negative elements of the weight matrix.
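A minimal sketch of equations (9)-(11) follows, assuming the sign convention that borrowing and short-selling exposures enter with a negative sign, so that multiplying by a positive financing rate in equation (8) yields a cost:

```python
import numpy as np

# A minimal sketch of equations (9)-(11): the deposit, borrowing and
# short-selling exposures implied by a weight vector. Under our reading
# of the text, B and S come out non-positive, so that a positive rate
# in equation (8) turns them into financing costs.
def financing_exposures(weights):
    w = np.asarray(weights, dtype=float)
    long_sum = w[w > 0].sum()
    short_sum = w[w < 0].sum()
    d = 1.0 - long_sum if long_sum < 1 else 0.0   # idle cash on deposit
    b = 1.0 - long_sum if long_sum > 1 else 0.0   # margin loan (negative)
    s = short_sum if short_sum < 0 else 0.0       # short borrow (negative)
    return d, b, s

print(financing_exposures([0.5, 0.3]))    # (0.2, 0.0, 0.0): deposit only
print(financing_exposures([1.2, 0.6]))    # (0.0, -0.8, 0.0): margin loan
print(financing_exposures([1.5, -0.4]))   # (0.0, -0.5, -0.4): loan + short
```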
We can adopt two different approaches regarding $RF = [R_D\ R_B\ R_S]$. On one hand, we can assume that it is constant, and analyze how its value influences portfolio allocation. Alternatively, we can substitute it with some reference rate, such as the one-week LIBOR rate or the Fed target rate. In this paper, we chose this latter approach, as it captures potential dependencies between the cost of financing and the performance of the risky assets. The interest paid on deposits, $R_D$, is likely to be lower than the reference rate, while the interest collected on margin loans as well as on loans for short-selling is typically higher. In this paper, we rely on an educated guess based on Fortune's (2000, 2001) papers and assume that deposits pay 10 bps less than the reference rate but at least zero, borrowing costs 100 bps above the long-term reference rate and 200 bps above the short-term reference rate, while borrowing for short-selling costs 200 bps above the reference rate. (We have assumed higher costs for short-selling because, from the perspective of the brokerage firm, short-selling is more risky than a leveraged purchase: the loss in case of a leveraged purchase is limited, whereas the loss in case of short-selling is unlimited.) Naturally, the actual rate on margin loans depends on the brokerage firm and is likely to vary by client even within one firm, and investigating these contracts is well beyond the scope of this paper.
2.3 Theory and estimation of BCC distributions
The number of boundary crossing events can be calculated directly, simply by counting the number of crossing events. Alternatively, it can be calculated recursively using the first exit time distribution and upper boundary crossing probabilities. (A note on terminology: there is no consensus in the literature. "First passage time" or "hitting time" is typically used in situations where there is only one boundary; the "expected first passage time" describes the expected amount of time needed to reach that boundary, while the "first passage time distribution" aims to characterize the full distribution. The case of two boundaries is usually referred to as "first exit time" or "double-barrier hitting time", although the term "first exit time" is also used to describe first passage time, see for example Wilmott (1998, p. 144). Exit times should not be confused with "first range time", as range is generally used to describe the difference between the maximum and the minimum value. In this paper, we follow the terminology of Borodin and Salminen (2002), who use the name "first exit time" to describe the case of double boundaries, so we stick to this notation as well.) These concepts will be reviewed next. Note that we will only provide the definitions along with a brief discussion, and we also point to references where the interested reader may find the technical details.
Definition 1 Let the first exit time $TUL$ be defined as the time at which the process $X_t$ crosses either boundary:
\[
TUL =
\begin{cases}
\inf(t : X_t \notin (LB, UB)) & \text{if } t \text{ is finite} \\
\infty & \text{otherwise}
\end{cases} \tag{12}
\]
Throughout the paper, we are going to assume that $TUL$ is positive and finite, which are non-elementary assumptions. The finiteness of the first exit time is a well-known property for martingales, which is typically proven using Doob's lemma (the optional sampling theorem), as discussed for example by Medvegyev (2007). As for non-martingales, finiteness is proven by first converting the stochastic process into a martingale, as explained for example in Karlin and Taylor (1998), and then applying Doob's lemma. The assumption that $TUL > 0$ is problematic only if the limit of the boundaries at the starting point is equal to the starting value of the restarted process, a case which we will avoid in this paper.
Definition 2 Let the first exit time distribution $fet(t, UB, LB, X_0)$ be defined as the probability distribution describing the probability that the first exit time is $t$.
We assume that the first exit time distribution is stationary. The distribution can either be calculated analytically for certain parametric processes, or estimated from data using kernel density estimation. The typical procedure for the analytical work begins by subtracting the expected value from the original stochastic process, which results in a martingale. Next, we make use of Doob's lemma and equate the initial value of this martingale with its expected value at the first exit time. Finally, by rearranging this expected value, we can obtain the Laplace transform of the first exit times. The probability distribution functions can then be derived by inverting these Laplace transforms. As for the non-parametric case, when estimating the first exit time distribution, it is important to take into account not only the closing, but the minimum and maximum values as well, otherwise we induce sampling bias. As for the type of kernel, simulations suggest using one based on normal distributions.
Definition 3 Let the upper boundary crossing probability be defined as the probability that the stochastic process reaches the upper boundary before hitting the lower one:
\[
p = P(X_{TUL} = UB_{TUL} \mid UB_t > X_t > LB_t);\quad 0 < t < TUL \tag{13}
\]
Also, let us introduce the notation $q$ for the lower boundary crossing probability, that is, $q = 1 - p$. We assume that the upper boundary crossing probabilities are constant. For analytical processes, the boundary crossing probability is typically expressed using scale functions, as explained for example in Karlin and Taylor (1981):
\[
P(X_{TUL} = UB) = \frac{S(X_0) - S(lb)}{S(ub) - S(lb)} \tag{14}
\]
where $S(x) = \exp\!\left(-\int^x \frac{2\mu(y)}{\sigma^2(y)}\,dy\right)$ is the scale function and $\mu(\cdot)$ and $\sigma^2(\cdot)$ are the infinitesimal moments. The lower limit of the integral does not play a significant role and is thus omitted, in accordance with the literature. This equation essentially shows that once the process has been appropriately scaled, the probability of upper (or lower) boundary crossing depends only on the initial point's relative distance from the lower and upper boundaries. For non-parametric processes, the boundary crossing probability can be estimated by the ratio of the number of upper crossings to the number of total crossings.
We characterize the upper and the lower boundary crossing counting distribution with the following matrix:
\[
PUL =
\begin{pmatrix}
PUL_1(0) & PUL_2(0) & \cdots & PUL_T(0) \\
PUL_1(1) & PUL_2(1) & \cdots & PUL_T(1) \\
\vdots & \vdots & \ddots & \vdots \\
PUL_1(n) & PUL_2(n) & \cdots & PUL_T(n)
\end{pmatrix} \tag{15}
\]
where $PUL_t(i)$ describes the probability that until period $t$, exactly $i$ boundary-crossing events have occurred, that is, $p(YUL_t = i) = PUL_t(i)$.
Calculating the first row simply involves evaluating the first exit time distribution at time $t$:
\[
PUL_t(0) =
\begin{cases}
1 - \int_0^t fet(s)\,ds & \text{for continuous distributions} \\
1 - \sum_{k=1}^t fet(k) & \text{for discrete distributions}
\end{cases} \tag{16}
\]
Thus, we have been able to obtain the first row of the $PUL$ matrix. Any other row can be calculated recursively using:
\[
PUL_t(j) = F_2(j)\,F_1 \tag{17}
\]
where $j$ indicates the number of crossings. Now, the $F_1$ and $F_2(j)$ matrices can both be calculated recursively using first exit time distributions, as shown in Farkas (2013); therefore, the first exit time distribution fully characterizes the boundary crossing distribution for the case of both lower and upper crossings.
For our purpose, we also need $YD_t$, describing the difference between the number of upper and lower crossing events, which can be obtained using the following random-time binomial tree. We named it a random-time tree because the time needed to move from one node to the next is random.
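For a discrete first exit time distribution, equation (16) is just the survival function, as the following sketch (with an illustrative density) shows:

```python
import numpy as np

# A minimal sketch of equation (16) for a discrete first exit time
# distribution: the probability of zero crossings up to each t is the
# survival function of the first exit time. `fet` is an illustrative
# discrete density over t = 1..T, not an estimated one.
def pul_first_row(fet):
    fet = np.asarray(fet, dtype=float)
    return 1.0 - np.cumsum(fet)  # PUL_t(0) for t = 1..T

fet = np.array([0.05, 0.10, 0.15, 0.20, 0.20, 0.30])
print(pul_first_row(fet))  # [0.95 0.85 0.70 0.50 0.30 0.00]
```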
Figure 3: Random-time binomial tree
In comparison to classical binomial trees, where the stochastic variable may either go up or down, here we allow for three options: the variable may go up, go down, or remain in that particular node. Such a random-time binomial tree could also be represented by a classical trinomial tree. A node of the tree, $B(i, j)$, can be described by the number of boundary crossing events: $i$ is the number of upper crossings, $j$ is the number of lower crossings. Note that the grid itself also changes dynamically as time changes; the diagram is essentially a snapshot taken at a given point in time.
Characterizing the grid can be done in two steps. The vertical location can simply be described by the number of boundary crossings:
\[
BV = \begin{pmatrix} PUL_T(0) & PUL_T(1) & \cdots & PUL_T(T) \end{pmatrix} \tag{18}
\]
The horizontal location can simply be described by the boundary crossing probabilities conditioned on the number of boundary crossings. For $j$ boundary crossings:
\[
BH(j) = \begin{pmatrix} 0 & \cdots & p^j & p^{j-1}q & \cdots & q^j & \cdots & 0 \end{pmatrix}^\top \tag{19}
\]
Since the horizontal and the vertical locations are independent, the grid can be characterized as:
\[
B = BV \cdot BH \tag{20}
\]
Obtaining the distribution from $B$ simply involves collecting the terms where $i - j$ is equal:
\[
p(YD_t = k) = \sum_{l=k}^{T} B(l, l - k) \tag{21}
\]
Overall, the portfolio's value at some time $t$ can be described using $YD_t$, which in turn can be calculated from the first exit time distribution and the upper boundary crossing probabilities. Both can be estimated directly from the data using kernel estimation, or can be calculated analytically in certain cases.
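The following sketch mixes an illustrative number-of-crossings distribution with conditional up/down splits in the spirit of equations (18)-(21); the binomial path counts are our reading of the tree in Figure 3 and should be treated as an assumption:

```python
import numpy as np
from math import comb

# A minimal sketch of equations (18)-(21): mix the number-of-crossings
# distribution (BV) with conditional up/down splits to get p(YD_t = k).
def yd_distribution(bv, p):
    """bv[n] = p(YUL_t = n); p = upper crossing probability."""
    q = 1.0 - p
    dist = {}
    for n, prob_n in enumerate(bv):          # condition on n crossings
        for i in range(n + 1):               # i up, n - i down crossings
            k = i - (n - i)                  # YD = YU - YL
            w = comb(n, i) * p**i * q**(n - i)
            dist[k] = dist.get(k, 0.0) + prob_n * w
    return dict(sorted(dist.items()))

bv = [0.10, 0.25, 0.35, 0.30]                # illustrative p(YUL_t = n)
print(yd_distribution(bv, p=0.55))           # probabilities sum to one
```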
2.4 Simple Portfolio Choice under Geometric Brownian Motion and Log Utility
We continue by illustrating the technique described above using a simple portfolio consisting of a single risky asset, whose price follows a Geometric Brownian Motion (GBM), and a single composite asset which does not pay interest, under logarithmic utility. Besides being a frequently used pricing model, the GBM is a good starting point as it has an analytical solution, which is always helpful in calibrating and fine-tuning the BCC-based solution.

Let us begin with the case of state-based adjustment. As we only have one risky asset and have abstracted away from financing costs, $X_t$ follows a Geometric Brownian Motion with some drift $\mu$ and standard deviation $\sigma$. As the composite asset does not pay interest, the boundaries are constant. For the GBM model under constant boundaries, the first exit time distribution as well as the upper crossing probability can be calculated analytically. The first exit time distribution for a Geometric Brownian Motion with variance normalized to one is given by Borodin and Salminen (2002, p. 295):
\[
fet(t, lb, ub) = e^{-\mu^2 t/2}\left(e^{\mu\, lb}\, ss_t(ub,\, ub - lb) + e^{\mu\, ub}\, ss_t(lb,\, ub - lb)\right) dt \tag{22}
\]
where $ss_t(\cdot,\cdot)$ is the theta function. Substituting the scale density of the Brownian motion yields the formula for the upper boundary crossing probability:
\[
P(X^{GBM}_{TUL} = UB) = \frac{1 - \exp\!\left(-lb\,\frac{2\mu}{\sigma^2}\right)}{\exp\!\left(-ub\,\frac{2\mu}{\sigma^2}\right) - \exp\!\left(-lb\,\frac{2\mu}{\sigma^2}\right)} \tag{23}
\]
Overall, the first exit time distribution and the upper boundary crossing probability can both be calculated analytically. Consequently, the BCC distributions and the corresponding log-returns can also be obtained analytically, without simulations.
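Equation (23) is easy to evaluate directly; the following sketch (with illustrative parameters) does so:

```python
import numpy as np

# A minimal sketch of equation (23): the probability that a Brownian
# motion with drift mu and volatility sigma hits the upper boundary ub
# before the lower boundary lb (both measured from the starting point,
# lb < 0 < ub). Parameter values below are illustrative.
def upper_crossing_prob(mu: float, sigma: float, lb: float, ub: float) -> float:
    k = 2.0 * mu / sigma**2
    return (1.0 - np.exp(-lb * k)) / (np.exp(-ub * k) - np.exp(-lb * k))

# Symmetric boundaries and a positive drift tilt the odds upward:
print(upper_crossing_prob(mu=0.04, sigma=0.18, lb=-0.05, ub=0.05))  # ~0.53
```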
In order to create a basis for comparison for these analytical results, let us briefly discuss the case of continuous adjustment, introduced by Merton (1971) and put into perspective, for example, by Peters (2011). The portfolio consisting of a risky asset and a risk-free composite asset follows:
\[
dP_t = (\mu_{rr} + w\mu_e)P_t\,dt + w\sigma P_t\,dW_t \tag{24}
\]
where $\mu_{rr}$ is the return on the risk-free asset, assumed to be zero for the purpose of this exercise, $\mu_{rm}$ is the market return, $\mu_e = \mu_{rm} - \mu_{rr}$ is the excess return, $w$ is the weight of the risky asset, and finally $W_t$ is the Wiener process. This formulation implicitly assumes that investors can keep a constant fraction of their wealth in the risky asset, which would require continuously adjusting the number of shares unless $w = 1$. The question of interest is the log-optimal value of $w$. Using Itô's formula:
\[
d\ln(P_t) = \left(\mu_{rr} + w\mu_e - \tfrac{1}{2}w^2\sigma^2\right)dt + w\sigma\,dW_t \tag{25}
\]
The expected value of the Wiener process is zero, so the expected value of the exponential growth rate can be expressed as:
\[
E(g) = E\!\left(\frac{d\ln(P_t)}{dt}\right) = \mu_{rr} + w\mu_e - \tfrac{1}{2}w^2\sigma^2 \tag{26}
\]
Solving for the optimal value and substituting $\mu_{rr} = 0$ yields:
\[
w_{opt} = \frac{\mu_e}{\sigma^2} + \frac{1}{2} \tag{27}
\]
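A minimal sketch of equations (26)-(27) follows; note that equation (27) carries the extra $+1/2$ if $\mu_e$ is measured as the excess logarithmic growth rate rather than the arithmetic excess drift, which is our reading of the formula and should be treated as an assumption:

```python
# A minimal sketch of equations (26)-(27). Maximizing equation (26) over
# w gives w* = mu_a / sigma^2 with mu_a the arithmetic excess drift; if
# mu_e is instead the excess logarithmic growth rate (mu_a - sigma^2/2),
# the optimum reads mu_e/sigma^2 + 1/2 as in equation (27).
def expected_growth(w: float, mu_a: float, sigma: float, mu_rr: float = 0.0) -> float:
    # Equation (26) with arithmetic excess drift mu_a.
    return mu_rr + w * mu_a - 0.5 * w**2 * sigma**2

def w_opt(mu_e_log: float, sigma: float) -> float:
    # Equation (27): mu_e_log = mu_a - sigma^2 / 2 under our reading.
    return mu_e_log / sigma**2 + 0.5

sigma, mu_a = 0.0342, 0.0437          # the paper's ML estimates for the DJIA
w_star = w_opt(mu_a - 0.5 * sigma**2, sigma)
print(w_star)                          # equals mu_a / sigma**2
print(expected_growth(w_star, mu_a, sigma))
```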
The figure below compares the expected growth rate for state-dependent and continuous adjustment, using the diffusion parameters based on the closing prices of the Dow Jones Industrial Average between 1928 and 2012, estimated by the standard maximum likelihood method detailed, for example, in Gourieroux and Jasiak (2001), resulting in $\mu_{ML} = 0.0437$ and $\sigma_{ML} = 0.0342$. When calculating state-dependent adjustments, we placed the boundaries at a seven standard deviation distance.
Figure 4: State-dependent and Continuous Adjustment Under Geometric Brownian Motion

The diagram reveals little difference between continuous and state-based adjustment under this setup without transaction costs; therefore, the state-based adjustment, ceteris paribus (keeping other assumptions unchanged), does not influence the results significantly. Hence, the new approach keeps the intuition of the simpler model: replacing the continuous adjustment of Merton's model with the boundary-crossing based adjustment, ceteris paribus, does not lead to different results. Therefore, the results are not driven by the state-dependent adjustment mechanism we have just introduced. This is good news, as we can extend the analysis to a variety of more complex, more realistic financial models as well as to actual security prices without losing the insights provided by the simpler case. The case involving transaction costs is comparable with Dumas and Luciano's (1991) paper. Both solutions are analytical, yet they provide a closed-form solution while here we provide an algorithmic one. Both models assume that investors' adjustments are infrequent and state-based, yet we do not assume that investors are optimally inattentive. Finally, we work with log-optimal investors, while they assume a more general utility function.
3 Empirical results
The method discussed so far can be used to analyze a large variety of investment strategies. Here, we will select only a few topics which may be of general interest. The investor we have in mind is one who seeks absolute return; therefore, our interest is not to outperform a certain benchmark index by selecting its most lucrative components, but to choose between different asset classes in order to maximize the objective function, which is chosen to be the expected log of the portfolio's value. We complement log-optimality with VAR-based measurements as well. The first part of this section provides us with some descriptive results about the past, while the second part discusses the merits of an active, volatility-based investment strategy.
We chose log-optimality or growth-optimality because it is considered to be an important benchmark case having many theoretically appealing properties, as detailed by a large number of papers starting from Kelly (1956) and Breiman (1961), and reviewed recently, for example, by Christensen (2005) or in MacLean, Thorp, and Ziemba (eds. 2011). In particular, it has been shown by Breiman (1961) and by Long (1990) that there exists a portfolio (the growth-optimal portfolio, numeraire portfolio, or log-optimal portfolio) for which the price of any other portfolio, denominated in the price of the growth-optimal portfolio, becomes a supermartingale. Also, this portfolio is the one that maximizes the expected logarithm of the terminal wealth, as shown by Breiman (1961) or by Kelly (1956). It also maximizes the expected growth rate of the portfolio's value, as described in Merton and Samuelson (1992). Furthermore, the growth-optimal strategy maximizes the probability that the portfolio is more valuable than any other portfolio, and therefore has a certain selective advantage, as detailed by Latane (1959). Among all admissible portfolios, the growth-optimal portfolio minimizes the expected time needed to reach, for the first time, any predetermined constant, as shown by Merton and Samuelson (1992). If claims are discounted using the growth-optimal portfolio, then the expectation needs to be taken with respect to historical probability measures, as explained in Long (1990) or Bajeux-Besnainou and Portait (1997); therefore, these portfolios may provide a unifying framework for asset prices, as shown by Platen (2006).
Let us continue with general data-related issues. As we aim to work with long time series, we need to combine data series which are sampled at different frequencies. As adjusting our theory to handle sampling issues would create unnecessary complexities, we interpolate lower frequency data to the highest frequency, using a Brownian bridge for risky assets and step functions for the composite assets. Note that we could also aggregate the higher frequency data to the lower one, yet this solution would have resulted in a loss of information. Estimating the BCC distribution requires not only closing prices, but minimum and maximum values as well. We have approximated these values using the minimum and maximum values of the SP500 index between 1951 and 2014 in two steps. First, we calculated the ratio of minimum prices to closing prices as well as the ratio of maximum prices to closing prices. Next, we matched these ratios to the full period's closing prices randomly. Finally, multiplying the closing prices with these ratios resulted in the estimated minimum and maximum prices. Naturally, this approach ignores certain dependencies; for example, intra-day volatility is likely to be higher in volatile markets. Once again, a more sophisticated approach, or the Brownian approximation described in Mcleish (2002), would have increased the complexity greatly, and we did not see the additional benefit of going down this path.
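The matching procedure can be sketched in a few lines (all arrays below are illustrative placeholders, not the actual SP500 data):

```python
import numpy as np

# A minimal sketch of the min/max approximation described above: sample
# historical (low/close, high/close) ratio pairs and attach them to
# closing prices at random, keeping each day's pair together.
rng = np.random.default_rng(0)

def approximate_extremes(close, low_ratio, high_ratio):
    """Draw one historical ratio pair for each closing price."""
    idx = rng.integers(0, len(low_ratio), size=len(close))
    est_low = np.asarray(close) * np.asarray(low_ratio)[idx]
    est_high = np.asarray(close) * np.asarray(high_ratio)[idx]
    return est_low, est_high

close = np.array([100.0, 101.2, 99.8])            # closes needing extremes
low_ratio = np.array([0.99, 0.985, 0.995, 0.98])  # observed low/close ratios
high_ratio = np.array([1.01, 1.02, 1.005, 1.03])  # observed high/close ratios
print(approximate_extremes(close, low_ratio, high_ratio))
```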
As for data selection, we work with three asset classes: stocks, bonds, and gold. As for stocks, we have used the SP500 gross total return index, which assumes that dividends are reinvested. For the period 1870-1988, we have used Shiller's data, while from 1988 onwards, we have used the total return index obtained from the Chicago Board of Trade. We have abstracted away from taxation, as including it at this stage would increase complexity significantly and we do not see any significant additional benefit of going down this path. We are well aware that investing in the SP index was not possible before the introduction of ETFs, yet we feel that this index is still the best approximation of a representative diversified investment. Naturally, our results suffer from commonly known issues such as survivorship bias. As for bonds, we use the Bank of America Merrill Lynch US Corp Master Total Return Index Value, as downloaded from FRED's homepage, which can be considered a representative investment in corporate bonds. Finally, we used the London Bullion Market Association's daily fixing for modeling the return on a gold investment. Naturally, gold cannot actually be traded at exactly these prices, yet these data series represent well the prices of an average tradable gold-based instrument. Besides the usual issues, here we also abstract away from certain commodity-related issues, such as the problem of rolling forward the futures contracts. As for the composite asset, we either used the FED target rate obtained from FRED's database or the long-term interest rate from Shiller's database. This latter is somewhat problematic, yet we could not obtain short-term rates prior to 1954, and we felt that increasing the amount of data was more important than accounting for the differences between short rates and long rates.
3.1 Descriptive results
The first few diagrams focus on the choice between SP stocks and the composite asset in the United States between 1870 and 2013, assuming that the composite asset earns the long-term interest rate.

Figure 5: Log-optimal investment and VAR constraints in the United States
The diagram reveals that the log-optimal investment is a leveraged purchase; therefore, the use of margin loans does not necessarily imply risk-taking behavior: risk-averse, log-optimal investors may equally use this tool. The gain from leveraging, however, is relatively modest: the log-optimal weight of 1.65 implies only a 63 bps gain in comparison to the buy-and-hold return. The VAR levels are decreasing in portfolio weights. It is interesting to notice that a typical broker's recommendation, according to Mandelbrot and Hudson (2014), suggests holding 25 percent cash, 30 percent bonds, and 45 percent stocks; this latter corresponds to the level where the five-year 5% VAR of the portfolio's value is 1. Based on the diagram above, brokers may recommend a level where the investment is likely to be recovered, at least in nominal terms, after five years. Of course, this may be only a coincidence, but it may also suggest that average investors have a strong preference against losing money, and history has taught brokers where this level may be. It is also interesting to notice that downside risk falls into two categories: for moderate weights, the five-year downside risk is larger than the ten-year downside risk, while for higher weights, it is the other way around. Therefore, there are downside-increasing and downside-decreasing allocations.
Figure 6: Evolution of downside risk in time for various portfolio weights
This diagram contributes to the debate on whether the optimal fraction of wealth invested in stocks is horizon-dependent or not. Here, we show that if investors' preferences include downside risk, then investment rules are horizon-dependent. This is in line with common wisdom, described by Malkiel (1999), which states that the broker's typical recommendation is a horizon-dependent one: "The longer the period over which you can hold on to your investment, the greater should be the share of common stocks in your portfolio." Early academic papers such as Samuelson (1969) and Merton (1971) derived horizon-independent rules, which go against this common wisdom. Consequently, authors such as Brennan, Schwartz and Lagnado (1997), Liu and Loewenstein (2002), and Brandt (2009) proposed many adjustments to these early models, such as time-varying investment opportunities, time-varying parameters, transaction costs, or predictability of dividend growth, which may result in horizon-dependent rules. Here, we complement these findings by noticing the difference in the evolution of downside risk: in order to explain horizon-dependence, it is sufficient to assume that older investors are more concerned with preserving their wealth and put more emphasis on VAR limits, while younger ones are more focused on the potential upside.
So far, we have assumed that the optimal weight is time-independent, which may or may not be the case. In fact, certain active investment practices, commonly named market timing, assume that the optimal investment decision is time-dependent. These practices are built on two premises: first, they assume that optimal portfolio weights are time-dependent, and second, they assume that these weights are predictable. Here, we will focus on the first premise. In the parametric realm, it is often assumed that parameters are time-dependent, which can be translated into the non-parametric realm by assuming that the data-generating process is time-dependent. In fact, assuming a seven-year holding period reveals that the optimal portfolio weights appear to be highly time-dependent, ranging from zero to over ten. Of course, part of the variation is due to the fact that we rely on significantly less data, and hence the measurement is much noisier. Nevertheless, even without having a precise measure of the noise, it is still plausible to say that the log-optimal investment appears to be time-varying and that it may be log-optimal to occasionally hold leveraged positions.
Next, we continue by allowing for more types of assets and analyze the decision of an investor who has access to three types of assets: stocks, corporate bonds, and gold, between 1975 and 2013, assuming that the composite asset earns the short-term interest rate. The weights for the log-optimal investment consist of 0.5 for gold, 4.55 for corporate bonds, and 1.85 for stocks. The overall optimal exposure, defined as the sum of the risky assets' weights, is 6.9.
Figure 7: How does the log-optimal investment over multiple assets change over time?

Figure 8: Log-optimal investment for a portfolio consisting of gold, corporate bonds and stocks.
The general tendency is probably of more interest than the actual values: it was log-optimal to borrow at 200 bps over the short-term rate and invest mostly in corporate bonds and stocks. The gain from leveraging appears to be more significant than in the previous case. Once again, we have shown that a margin purchase does not necessarily involve risk-seeking behavior, and risk-averse, log-optimal investors may also rely on this financial tool. Let us finish this section by briefly reviewing how the log-optimal investment varies over time in this case.
Figure 9: Log-optimal investment for a portfolio consisting of gold, corporate bonds and stocks.
It appears to have been log-optimal to finance the purchase of corporate bonds using short-term debt. Likewise, the leveraged purchase of stocks typically increases the portfolio's expected growth rate. The leveraged purchase of gold has become log-optimal after the year 2000, which may be a structural issue related to commodity markets, or may also be caused by the rumors concerning the limited availability of the remaining global gold stocks.
3.2 Managing portfolio weights using predicted volatility
As already noted above, market timing is built on two premises. First, we have to assume that optimal portfolio weights are time-varying. The previous, descriptive section provides some evidence of this possibility. In this section, we deal with the issue of predictability. Due to the practical interest of this topic, there is such a large number of studies aiming to provide insights in this regard that we have room neither to review nor to test them. Besides, the validity of these predictive rules is often questioned due to data-snooping bias.
Here, we take an alternative route: first of all, we do not apply optimization, but present a method which increases the growth rate of the portfolio's value regardless of the chosen portfolio weights. The method we suggest is based on facts that are accepted by an overwhelming number of scientists and practitioners alike. More specifically, we propose to change the portfolio's weights based on the predicted volatility. This approach is built on two premises: first of all, the possibility of predicting volatility is well known, as detailed in a large number of studies starting perhaps from Bollerslev (1987). Second, we also know that under Geometric Brownian Motion, the optimal portfolio weight is inversely related to the volatility. Hence, it is reasonable to assume that the insight gained from the GBM model also carries over to actual security data. Overall, intuition suggests that it makes sense to reduce the portfolio's weight if the predicted volatility is relatively large, and increase it if the predicted volatility is low. We try to verify or falsify this intuition assuming only one risky asset, the total return index of the SP500 between 1988 and 2013, using the following algorithm:
1. We estimate a GARCH model using Matlab's GARCH package based on the previous 2520 observations, which corresponds to approximately 10 years of observations. When doing the estimation, we use the Glosten, Jagannathan and Runkle (1993) specification (further referred to as the GJR model), which includes a leverage term for modeling asymmetric volatility clustering. The first estimation period ranges from 1978 to end of May 1988, and the estimated model predicts the volatility until the end of June. The second estimation ranges from 1978 to end of June 1988, and the estimated model provides us with the predicted volatility until the end of July. We therefore adopt a rolling method, and hence, when making the predictions, we do not rely on any information which has not been revealed previously.
2. We calculate the BCC distributions using the total return indexes. Each time there is a boundary crossing, we set the quantities so that the portfolio weights are equal to some base weights multiplied by the ratio of the historical and the predicted variance: $W^* = W_{base} \cdot \max(\min(\sigma^2/\sigma^2_{predicted},\, 4),\, 0.25)$ (see the sketch after this list). We restrict the variance correction factor in order to ensure the stability and the precision of the measurement. The $W_{base}$ ranges from 0.15 to 5.

3. Finally, we also calculate the BCC distributions and the corresponding expected growth rate for the case of constant portfolio weights as well.
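The weight rule of step 2 can be sketched as follows (`sigma2_hist` and `sigma2_pred` are illustrative stand-ins for the historical variance and the GJR one-month-ahead forecast):

```python
import numpy as np

# A minimal sketch of the weight rule in step 2: scale a base weight by
# the variance ratio, clipped to [0.25, 4] for stability.
def adjusted_weight(w_base: float, sigma2_hist: float, sigma2_pred: float) -> float:
    correction = np.clip(sigma2_hist / sigma2_pred, 0.25, 4.0)
    return w_base * correction

print(adjusted_weight(1.5, sigma2_hist=0.012, sigma2_pred=0.030))  # de-lever
print(adjusted_weight(1.5, sigma2_hist=0.012, sigma2_pred=0.004))  # lever up
```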
We plot the expected growth rate of the portfolio's value as a function of the average portfolio weights. The resulting diagram reveals that such volatility-based adjustment results in a higher expected growth rate, regardless of the chosen average portfolio weight. The log-optimal expected growth rate represents a considerable gain of 700 bps in comparison to the constant portfolio weights.

Figure 10: Log-optimal investment for constant and volatility-dependent portfolio weights.
Also, the average portfolio weight for the log-optimal portfolio under volatility-dependent weights is significantly higher than for the case of constant weights. Overall, it appears to be log-optimal to take a leveraged position in a diversified stock index, but the exposure needs to be reduced in high-volatility periods. It is not difficult to see that such behavior, if acted upon in practice, has serious implications for financial stability. It is often argued that the role of speculative capital is to provide liquidity, and liquidity is needed most in highly volatile periods. Yet, we found that it is log-optimal to partially withdraw from the market in these high-volatility periods; therefore, speculative capital may disappear at times when it is needed the most.
4 Summary
In social sciences, researchers have to rely on a few, more or less imperfect techniques: we can build an analytical model, study the past, look for natural experiments, or draw conclusions from organized experiments. Each technique has its merits and its weaknesses. In this paper, we have introduced a new, non-parametric technique which relies on studying the past: we have analyzed the problem of portfolio allocation using historical data. The novelty of this paper is twofold: on one hand, we have introduced a new, non-parametric technique, summarized first; on the other, we have obtained some interesting results using this technique, summarized next.
1. As we solve the portfolio problem without parametric assumptions, we can incorporate many important features of financial data, such as volatility clustering or dependencies between the prices.

2. Also, our method describes the portfolio's value using a discrete stochastic variable; hence, it allows us to calculate not only the expected value, but also many other stochastic properties, such as VAR levels, without parametric assumptions or simulations.

3. The technique can be calibrated by analytically solving for the log-optimal portfolio under Geometric Brownian Motion.

4. Finally, we can easily deal with many practical issues, such as proportional transaction costs or the cost of margin financing.
Regarding potential limitations and weaknesses, our approach requires a complete market: we have to assume that transactions can be executed at the desired price level. In other words, we abstract away from execution risk. In non-liquid instruments, especially for large participants, execution risk may be significant; therefore, incorporating it into the framework may be meaningful, and it may be done in a forthcoming paper. Also, we implicitly assume that prices are continuous; therefore, at this stage we do not allow for discontinuities. This is not a serious issue in the current paper, since we have worked with indexes, where such discontinuities are relatively rare. Incorporating jumps would influence the risk level as well as the no-ruin conditions, all of which may be discussed in a forthcoming paper.

Regarding the actual results on historical data, we acknowledge that it would be a great mistake to assume that the future will be like the past, yet understanding what has happened should at least be indicative in figuring out what may come in the future. One way to summarize historical results is to translate them into stylized facts, which will be done next.
1. Historical data suggests that a not too extensive leveraged purchase of common stocks is log-optimal; therefore, a leveraged purchase does not imply risk-seeking behavior: risk-averse investors may also rely on this technique if their risk aversion is not too high.

2. Depending on the weight of the risky asset, the downside risk measured as the 5% VAR of the portfolio's value may be a decreasing or increasing function of time. Consequently, objective functions which incorporate the VAR of the portfolio's value or operate with VAR constraints result in horizon-dependent portfolio weights.

3. The optimal portfolio weights appear to be time-dependent even if we consider a relatively long investment horizon of seven years.

4. Irrespective of the chosen average portfolio weight, adjusting the exposure to the risky asset as a function of the predicted volatility, ceteris paribus, results in a higher expected growth rate than constant portfolio weights. Therefore, it is log-optimal to reduce exposure to the stock market in high volatility periods and increase it in low volatility periods.
Our findings can be interpreted at a theoretical as well as at a policy level. From a theoretical point of view, the term stability was imported from the natural sciences by Samuelson (1947), where it is often referred to as the Le Chatelier principle. This principle roughly states that if a closed system is subjected to an external shock, then the system shifts to counteract this shock. Analogously, one may argue that the financial system is a closed system, at least in the short run (of course, the analogy is by far not perfect, as the equity market influences the real economy in many ways, such as via equity withdrawal, via expectations, etc.), which reacts to real shocks. The stability of the financial system in this context involves analyzing investors' responses to these shocks. The following simple, illustrative numerical example will hopefully prove useful in explaining why leveraged positions may pose an issue concerning financial stability.
Figure 11: The reaction to a positive real shock depending on the weight of
the risky asset.
The table above compares the reactions to a positive real shock depending on the weight of the risky asset. In both cases, we have assumed that the desired portfolio weights are unaffected by the real shock. The chronology of the thought experiment is as follows: First, there is a positive shock which results in a price increase. Second, investors observe the price increase and react in order to restore the desired portfolio weights. Third, this reaction influences the prices again. The question is whether this third influence counteracts or amplifies the original shock. The example reveals that an investor who holds an unleveraged position is likely to counteract the original real shock; therefore, she acts in line with the Le Chatelier principle. On the other hand, an investor who holds a leveraged position is likely to react in a way which amplifies the original shock, which is not in line with the Le Chatelier principle. That is why, from a theoretical perspective, leveraged positions may be an issue for the stability of the financial system. Of course, this mechanism has been described by many, in a slightly different context, starting from Bogen and Krooss (1960), and is sometimes named pyramiding. Our contribution is to show that there is a tendency in actual financial markets to favour those who hold leveraged positions, and these leveraged positions may also be held by risk-averse, log-optimal investors. On a policy level, some authors, for example Shiller (2005) or Hardouvelis and Theodossiou (2002), propose that the Fed return to a more active margin policy, such as the one between 1934 and 1974. Based on our findings, log-optimal investment strategies may involve leveraged purchases; therefore, these policies may effectively influence log-optimal investors.
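The mechanics can be verified with a few lines of arithmetic; the numbers below are our own illustration, not those of Figure 11:

```python
# A minimal sketch of the thought experiment above, with made-up numbers:
# after a 10% price rise, does restoring the target weight require selling
# (counteracting the shock) or buying (amplifying it)?
def rebalance_trade(target_w: float, v0: float = 100.0, shock: float = 0.10) -> float:
    stock0 = target_w * v0             # initial risky holding
    stock1 = stock0 * (1 + shock)      # risky holding after the shock
    v1 = v0 + stock0 * shock           # new portfolio value (cash/debt fixed)
    return target_w * v1 - stock1      # required trade: <0 sell, >0 buy

print(rebalance_trade(0.5))   # -2.5: the unleveraged investor sells
print(rebalance_trade(2.0))   # +20.0: the leveraged investor buys more
```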
References

[1] Isabelle Bajeux-Besnainou and Roland Portait. The numeraire portfolio: A new perspective on financial theory. The European Journal of Finance, 3(4):291-309, 1997.

[2] Pierluigi Balduzzi and Anthony W. Lynch. Transaction costs and predictability: Some utility cost calculations. Journal of Financial Economics, 52(1):47-78, 1999.

[3] Pierluigi Balduzzi and Anthony W. Lynch. Predictability and transaction costs: The impact on rebalancing rules and behavior. The Journal of Finance, 55(5):2285-2309, 2000.

[4] William J. Baumol. The transactions demand for cash: An inventory theoretic approach. Quarterly Journal of Economics, 66(4):545-556, 1952.

[5] Michael J. Best and Robert R. Grauer. Sensitivity analysis for mean-variance portfolio problems. Management Science, 37(8):980-989, 1991.

[6] Avrim Blum and Adam Kalai. Universal portfolios with and without transaction costs. Machine Learning, 35(3):193-205, 1999.

[7] Jules Irwin Bogen and Herman Edward Krooss. Security credit: Its economic role and regulation. Prentice-Hall, 1960.

[8] Tim Bollerslev. A conditionally heteroskedastic time series model for speculative prices and rates of return. The Review of Economics and Statistics, pages 542-547, 1987.

[9] Andrei N. Borodin and Paavo Salminen. Handbook of Brownian motion: facts and formulae. Springer, 2002.

[10] Michael W. Brandt. Estimating portfolio and consumption choice: A conditional Euler equations approach. The Journal of Finance, 54(5):1609-1645, 1999.

[11] Michael W. Brandt. Hedging demands in hedging contingent claims. Review of Economics and Statistics, 85(1):119-140, 2003.

[12] Michael W. Brandt. Portfolio choice problems. Handbook of Financial Econometrics, 1:269-336, 2009.

[13] Leo Breiman. Optimal gambling systems for favorable games, 1961.

[14] Michael J. Brennan, Eduardo S. Schwartz, and Ronald Lagnado. Strategic asset allocation. Journal of Economic Dynamics and Control, 21(8):1377-1403, 1997.

[15] John Y. Campbell and Samuel B. Thompson. Predicting excess stock returns out of sample: Can anything beat the historical average? Review of Financial Studies, 21(4):1509-1531, 2008.

[16] Vijay Kumar Chopra and William T. Ziemba. The effect of errors in means, variances, and covariances on optimal portfolio choice. The Journal of Portfolio Management, 19(2):6-11, 1993.

[17] Morten Mosegaard Christensen. On the history of the growth optimal portfolio. Preprint, University of Southern Denmark, 389, 2005.

[18] George M. Constantinides. Capital market equilibrium with transaction costs. Journal of Political Economy, 94:842-862, 1986.

[19] Rama Cont. Empirical properties of asset returns: stylized facts and statistical issues. Quantitative Finance, 1(2), 2001.

[20] Thomas M. Cover. Universal portfolios. Mathematical Finance, 1(1):1-29, 1991.

[21] Bernard Dumas and Elisa Luciano. An exact solution to a dynamic portfolio choice problem under transactions costs. The Journal of Finance, 46(2):577-595, 1991.

[22] Peter Farkas. Counting process generated by boundary-crossing events: Theory and statistical applications. Working paper (available at http://econpapers.repec.org/paper/ceueconwp/), Central European University, 2013.

[23] Peter Fortune. Margin requirements, margin loans, and margin rates: Practice and principles. New England Economic Review, pages 19-44, 2000.

[24] Peter Fortune. Margin lending and stock market volatility. New England Economic Review, pages 3-26, 2001.

[25] Lawrence R. Glosten, Ravi Jagannathan, and David E. Runkle. On the relation between the expected value and the volatility of the nominal excess return on stocks. The Journal of Finance, 48(5):1779-1801, 1993.

[26] Christian Gourieroux and Joann Jasiak. Financial econometrics: Problems, models, and methods, volume 1. Princeton University Press, Princeton, NJ, 2001.

[27] László Györfi and István Vajda. Growth optimal investment with transaction costs. In Algorithmic Learning Theory, pages 108-122. Springer, 2008.

[28] Gikas A. Hardouvelis and Panayiotis Theodossiou. The asymmetric relation between initial margin requirements and stock market volatility across bull and bear markets. Review of Financial Studies, 15(5):1525-1559, 2002.

[29] Márk Horváth and András Urbán. Growth optimal portfolio selection with short selling and leverage. In Machine Learning for Financial Engineering, eds. L. Györfi, G. Ottucsák, H. Walk, Imperial College Press, pages 151-176, 2011.

[30] J. David Jobson and Bob Korkie. Estimation for Markowitz efficient portfolios. Journal of the American Statistical Association, 75(371):544-554, 1980.

[31] Samuel Karlin and Howard M. Taylor. A second course in stochastic processes, volume 2. Gulf Professional Publishing, 1981.

[32] Samuel Karlin and Howard M. Taylor. An Introduction to Stochastic Modeling. Academic Press, 1998.

[33] John L. Kelly. A new interpretation of information rate. Information Theory, IRE Transactions on, 2(3):185-189, 1956.

[34] Henry Allen Latane. Criteria for choice among risky ventures. The Journal of Political Economy, pages 144-155, 1959.

[35] Hong Liu and Mark Loewenstein. Optimal portfolio selection with transaction costs and finite horizons. Review of Financial Studies, 15(3):805-835, 2002.

[36] Andrew W. Lo and A. Craig MacKinlay. A Non-Random Walk Down Wall Street. 1999.

[37] John Long. The numeraire portfolio. Journal of Financial Economics, 26(1):29-69, 1990.

[38] Leonard C. MacLean, Edward O. Thorp, and William T. Ziemba. The Kelly capital growth investment criterion: Theory and practice, volume 3. World Scientific, 2010.

[39] Burton G. Malkiel and Eugene F. Fama. Efficient capital markets: A review of theory and empirical work. The Journal of Finance, 25(2):383-417, 1970.

[40] Burton Gordon Malkiel. A random walk down Wall Street: including a life-cycle guide to personal investing. WW Norton & Company, 1999.

[41] Benoit Mandelbrot and Richard L. Hudson. The Misbehavior of Markets: A fractal view of financial turbulence. Basic Books, 2014.

[42] Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77-91, 1952.

[43] Lionel Martellini and Philippe Priaulet. Competing methods for option hedging in the presence of transaction costs. The Journal of Derivatives, 9(3):26-38, 2002.

[44] Donald L. McLeish. Highs and lows: Some properties of the extremes of a diffusion and applications in finance. Canadian Journal of Statistics, 30(2):243-267, 2002.

[45] Peter Medvegyev. Stochastic integration theory. Number 14. Oxford University Press, Oxford, 2007.

[46] Robert C. Merton. Lifetime portfolio selection under uncertainty: The continuous-time case. Review of Economics and Statistics, 51(3):247-257, 1969.

[47] Robert C. Merton. Optimum consumption and portfolio rules in a continuous-time model. Journal of Economic Theory, 3(4):373-413, 1971.

[48] Robert C. Merton and P. A. Samuelson. Continuous-time finance. 1992.

[49] Ole Peters. Optimal leverage from non-ergodicity. Quantitative Finance, 11(11):1593-1602, 2011.

[50] Eckhard Platen. A benchmark approach to finance. Mathematical Finance, 16(1):131-151, 2006.

[51] P. A. Samuelson. Foundations of economic analysis. 1947.

[52] P. A. Samuelson. Lifetime portfolio selection by dynamic stochastic programming. Review of Economics and Statistics, (51):239-246, 1969.

[53] Robert J. Shiller. Irrational exuberance. Random House LLC, 2005.

[54] James Tobin. The interest-elasticity of transactions demand for cash. Cowles Foundation for Research in Economics at Yale University, 1956.

[55] Paul Wilmott. Derivatives. 1998.
5 Appendix
The following section describes how to set up the boundary structure in such a way that the change in the portfolio's value can only take two discrete values. Since we will only consider two separate moments in time, the variables describing the second period are indicated with a prime, e.g. $V'$. The change in the portfolio's value between these two periods can be described as follows:
\[
V' = V + \Pi - TR + DBS \tag{28}
\]
where $\Pi$ is the profit or loss, $TR$ is the transaction cost, and $DBS$ describes the cost of financing. Let us substitute each element.
\[
\Pi = \sum_j q_j \Delta P_j = \sum_j \frac{w_j V}{P_j} \Delta P_j = V \sum_j w_j p_j \tag{29}
\]
where $j$ indexes the assets in the portfolio and $p_j$ is the price change since the last boundary crossing event, measured in percentage terms. Since there is no change in the quantities between two boundary crossing events, the profit is the weighted average price change.
\[
TR = \sum_j \left|q'_j - q_j\right| \cdot P'_j \cdot tr = \sum_j (q'_j - q_j) \cdot P'_j \cdot tr_j \tag{30}
\]
where $tr_j = tr$ if $q'_j > q_j$ and $tr_j = -tr$ otherwise. Substituting out the quantities yields:
\[
TR = \sum_j \left(\frac{w'_j V'}{P'_j} - \frac{w_j V}{P_j}\right) \cdot P'_j \cdot tr_j \tag{31}
\]
Simplifying results in:
\[
TR = V' \sum_j w'_j tr_j - V \sum_j w_j \frac{P'_j}{P_j} tr_j \tag{32}
\]
Finally, financing can be calculated directly from the data and needs to be expressed as:
\[
DBS = V \cdot dbs(t) \tag{33}
\]
Combining yields:
\[
V'\left(1 - \sum_j w'_j tr_j\right) = V\left(1 + \sum_j w_j p_j - \sum_j w_j \frac{P'_j}{P_j} tr_j + dbs(t)\right) \tag{34}
\]
Therefore, the change in wealth between two boundary crossings is equal to:
\[
\frac{V'}{V} = \frac{1 + \sum_j w_j p_j - \sum_j w_j \frac{P'_j}{P_j} tr_j + dbs(t)}{1 - \sum_j w'_j tr_j} = X(t) \tag{35}
\]
Henceforth, we will refer to $X(t)$ as the signal. The boundary crossing counting process counts how many times the signal crosses the boundaries:
\[
X(t) =
\begin{cases}
GU & \text{for upper crossings} \\
GL & \text{for lower crossings}
\end{cases} \tag{36}
\]
For ease of calculation, it is advisable to choose $GL = 1/GU$. The algorithm for calculating the boundary crossing counts is then as follows.

1. Initialize the signal by setting $X(0)$ to its starting value.

2. Calculate the value of the signal for each consecutive observation. This calculation involves two steps. On one hand, we have to account for the change in prices, using minimum prices for lower crossings and maximum prices for upper crossings. Also, we have to take into account financing, during which we have assumed that interest on deposits is paid after the period, while interest on lending or on short-selling is collected in advance.

3. Calculate the weighted percentage change in the portfolio's value, using minimum prices for lower crossings and maximum prices for upper crossings.

4. Update $YU = YU + k$ and $YL = YL + k$ for upper and lower crossings, respectively, where $k \in \mathbb{N}$ is the largest natural number for which the crossing inequality holds. Most of the time $k = 1$, yet occasionally, upon large changes, it may be greater, as discussed in Farkas (2013).
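For completeness, here is a minimal sketch of this algorithm for a single asset, abstracting away from transaction and financing costs and from the minimum/maximum refinement of steps 2-3, so the signal reduces to the cumulative price ratio since the last restart:

```python
import numpy as np

# A minimal sketch of the counting algorithm above for a single risky
# asset with no transaction or financing costs. `prices` and the
# boundary GU are illustrative; multi-crossing steps (k > 1) are
# handled by repeated restarts inside the while loops.
def count_crossings(prices, gu):
    gl = 1.0 / gu
    yu = yl = 0
    anchor = prices[0]                   # price at the last restart
    for p in prices[1:]:
        x = p / anchor                   # the signal X(t)
        while x >= gu:                   # upper crossing(s): k may be > 1
            yu += 1
            anchor *= gu
            x = p / anchor
        while x <= gl:                   # lower crossing(s)
            yl += 1
            anchor *= gl
            x = p / anchor
    return yu, yl

prices = np.array([100, 101.5, 103.2, 101.0, 99.0, 102.5, 104.8])
print(count_crossings(prices, gu=1.02))  # (3, 1) for these numbers
```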