
QuantStrat TradeR

Trading, QuantStrat, R, and more.

Category Archives: Data Analysis


An Introduction to Change Points (packages: ecp
and BreakoutDetection)
Posted on January 21, 2015 Posted in Data Analysis, R, Replication, Volatility Tagged R 7 Comments

A forewarning: this post is me going out on a limb, to say the least. In fact, it's a post/project requested from me by Brian Peterson, and it follows a new paper that he's written on how to thoroughly replicate research papers. While I've replicated results from papers before (with FAA and EAA, for instance), this is a first for me in terms of what I'll be doing here.

In essence, it is a thorough investigation into the paper Leveraging Cloud Data to Mitigate User Experience
from Breaking Bad, and follows the process from the aforementioned paper. So, here we go.

*********************

Twitter Breakout Detection Package


Leveraging Cloud Data to Mitigate User Experience From Breaking Bad

Summary of Paper

Introduction: in a paper detailing the foundation of the breakout detection package (arXiv ID 1411.7955v1), James, Kejariwal, and Matteson demonstrate an algorithm that detects breakouts in twitter's production-level cloud data. The paper begins by laying the mathematical foundation and motivation for energy statistics, the permutation test, and the E-divisive with medians algorithm, which together create a fast way of detecting a shift in median between two nonparametric distributions that is robust to the presence of anomalies. Next, the paper demonstrates a trial run through some of twitter's production cloud data, and compares the non-parametric E-divisive with medians to an algorithm called PELT. For the third topic, the paper discusses potential applications, one of which is quantitative trading/computational finance. Lastly, the paper states its conclusion, which is the addition of the E-divisive with medians algorithm to the existing literature of change point detection methodologies.

The quantitative and computational methodologies of the paper use a modified variant of energy statistics made more resilient against anomalies through the use of robust statistics (viz. the median). The idea of energy statistics is to compare the distances between the means of two random variables contained within a larger time series. The hypothesis test to determine whether this difference is statistically significant is called the permutation test, which permutes data from the two time series a finite number of times to make the process of comparing permuted time series computationally tractable. However, the presence of anomalies, such as in twitter's production cloud data, would limit the effectiveness of this process when using simple means. To that end, the paper proposes using the median, and due to the additional computational time resulting from the weaker distribution assumptions needed to extend the generality of the procedure, the paper devises the E-divisive with medians algorithms, one of which works off of distances between observations, and one of which works with the medians of the observations themselves (as far as I understand). To summarize, the E-divisive with medians algorithms exist as a way of creating a computationally tractable procedure for determining whether or not a new chunk of time series data is considerably different from the previous one, through the use of advanced distance statistics robust to anomalies such as those present in twitter's cloud data.

To compare the performance of the E-divisive with medians algorithms, the paper compares them to an existing algorithm called PELT (which stands for Pruned Exact Linear Time) on various quantitative metrics, such as Time To Detect (the time from the exact moment of the breakout to when the algorithms report it, if at all), along with precision, recall, and the F-measure, the harmonic mean of precision and recall. Comparing PELT to the E-divisive with medians algorithm showed that the E-divisive algorithm outperformed the PELT algorithm in the majority of data sets. Even when anomalies were either smoothed by taking the rolling median of their neighbors, or removed altogether, the E-divisive algorithm still outperformed PELT. Of the variants of the EDM algorithm (EDM head, EDM tail, and EDM-exact), the EDM-tail variant (i.e. the one using the most recent observations) was also quickest to execute. However, due to fewer assumptions about the nature of the underlying generating distributions, the various E-divisive algorithms take longer to execute than the PELT algorithm, which has stronger assumptions but worse general performance. To summarize, the EDM algorithms outperform PELT in the presence of anomalies, and generally speaking, the EDM-tail variant seems to work best when computational running time is considered as well.
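For readers who want the metrics spelled out, here is a small, hypothetical illustration of precision, recall, and the F-measure for a set of detected breakout locations versus the true ones, counting a detection as correct if it falls within a tolerance window. The function name, tolerance, and inputs are made up for illustration; they are not from the paper.

#precision: share of detections that match a true breakout; recall: share of true
#breakouts that were detected; F-measure: harmonic mean of the two
breakoutScores <- function(detected, actual, tol = 5) {
  precision <- mean(sapply(detected, function(d) any(abs(d - actual) <= tol)))
  recall <- mean(sapply(actual, function(a) any(abs(detected - a) <= tol)))
  c(precision = precision, recall = recall,
    F = 2 * precision * recall / (precision + recall))
}

breakoutScores(detected = c(48, 90, 150), actual = c(47, 87, 120, 200))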

The next section dealt with the history and applications of change-point/breakout detection algorithms, in
fields such as finance, medical applications, and signal processing. As finance is of a particular interest, the
paper acknowledges the ARCH and various flavors of GARCH models, along with the work of James and
Matteson in devising a trading strategy based on change-point detection. Applications in genomics to detect
cancer exist as well. In any case, the paper cites many sources showing the extension and applications of
change-point/breakout detection algorithms, of which finance is one area, especially through work done by
Matteson. This will be covered further in the literature review.

To conclude, the paper proposes a new algorithm called E-divisive with medians, complete with a new statistical permutation test using advanced distance statistics, to determine whether or not a time series has had a change in its median. This method makes fewer assumptions about the nature of the underlying distribution than a competing algorithm, and is robust in the face of anomalies, such as those found in twitter's production cloud data. The algorithm outperforms a competing algorithm which possesses stronger assumptions about the underlying distribution, detecting a breakout sooner in a time series, even if it takes longer to run. The applications of such work range from finance to medical devices, and further beyond. As change-point detection is a technique around which trading strategies can be constructed, it has particular relevance to trading applications.

Statement of Hypothesis

Breakouts can occur in data that does not conform to any known regular distribution, thus rendering techniques that assume a certain distribution less effective. Using the E-divisive with medians algorithm, the paper attempts to detect breakouts in time series whose innovations come from no regular distribution, and, if effective, the algorithm will outperform an existing one that possesses stronger assumptions about distributions. To validate or refute a more general form of this hypothesis, which is the ability of the algorithm to detect breakouts in a timely fashion, this summary tests it on the cumulative squared returns of the S&P 500, and compares the analysis created by the breakpoints to the analysis performed by Dr. Robert J. Frey of Keplerian Finance, a former managing director at Renaissance Technologies.

Literature Review

Motivation

A good portion of the practical/applied motivation of this paper stems from the explosion of growth in
mobile internet applications, A/B testing, and other web-specific reasons to detect breakouts. For instance,
longer loading time on a mobile web page necessarily results in lower revenues. To give another example,
machines in the cloud regularly fail.

However, the more salient literature regarding the topic is the literature dealing with the foundations of the
mathematical ideas behind the paper.

Key References

Paper 1:

David S. Matteson and Nicholas A. James. A nonparametric approach for multiple change point analysis of multivariate data. Journal of the American Statistical Association, 109(505):334-345, 2013.

Thesis of work: this is the original paper for the e-divisive and e-agglomerative algorithms, which are offline, nonparametric methods of detecting change points in time series. Unlike Paper 3, this paper lays out the mathematical assumptions, lemmas, and proofs for a formal and mathematical presentation of the algorithms. It also documents performance against the PELT algorithm, presented in Paper 6 and technically documented in Paper 5, and that comparison is favorable. The source paper being replicated builds on the exact mathematics presented in this paper, and this report uses the ecp R package, the actual implementation/replication of this work, to form a comparison for the source paper's own innovations.

Paper 2:

M. L. Rizzo and G. J. Szekely. DISCO analysis: A nonparametric extension of analysis of variance. The Annals of Applied Statistics, 4(2):1034-1055, 2010.

Thesis of work: this paper generalizes the ANOVA using distance statistics. The technique aims to find differences among distributions beyond their sample means. Through the use of distance statistics, it aims to answer more general questions about the nature of distributions (e.g. identical means, but different distributions as a result of different factors). Its applicability to the source paper is that it forms the basis of the ideas for the paper's divergence measure, as detailed in its second section.

Paper 3:

Nicholas A. James and David S. Matteson. ecp: An R package for nonparametric multiple change point
analysis of multivariate data. Technical report, Cornell University, 2013.

Thesis of work: the paper introduces the ecp package, which contains the e-agglomerative and e-divisive algorithms for detecting change points in time series in the R statistical programming language (in use on at least one elite trading desk). The e-divisive method recursively partitions a time series and uses a permutation test to determine change points, but it is computationally intensive. The e-agglomerative algorithm allows for user input for the initial time-series segmentation and is computationally faster. Unlike most academic papers, this paper also includes examples of data and code in order to facilitate the use of these algorithms. Furthermore, the paper includes applications to real data, such as the companies found in the Dow Jones Industrial Index, further proving the effectiveness of these methods. This paper is important to the topic in question because the E-divisive algorithm created by James and Matteson forms the base change point detection process upon which the source paper builds its own innovations and against which it visually compares; furthermore, the source paper restates many of the techniques found in this paper.

Paper 4:

Owen Vallis, Jordan Hochenbaum, and Arun Kejariwal. A novel technique for long-term anomaly detection in the cloud. In 6th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud '14), June 2014.

Thesis of work: the paper proposes the use of piecewise median and median absolute deviation statistics to detect anomalies in time series. The technique builds upon the ESD (Extreme Studentized Deviate) technique and uses piecewise medians to approximate a long-term trend, before extracting seasonality effects from periods shorter than two weeks. The piecewise median method of anomaly detection has a greater F-measure for detecting anomalies than either the standard STL (seasonal-trend decomposition using loess) or quantile regression techniques. Furthermore, the piecewise median method executes more than three times faster. The relevance of this paper to the source paper is that it motivates the use of robust statistics and building the techniques in the source paper upon the median as opposed to the mean.

Paper 5:

Rebecca Killick and Kaylea Haynes. changepoint: An R package for changepoint analysis

Thesis of work: the manual for the implementation of the PELT algorithm, written by Rebecca Killick and Kaylea Haynes. This package is a competing change-point detection package, mainly focused around the Pruned Exact Linear Time algorithm, although it also contains other, weaker algorithms, such as the segment neighborhoods algorithm. Essentially, it is a computational implementation of the work in Paper 6. Its relevance to the source paper is that the paper at hand compares its own methodology against PELT, and often outperforms it.

Paper 6:

Rebecca Killick, Paul Fearnhead, and I. A. Eckley. Optimal detection of changepoints with a linear computational cost. Journal of the American Statistical Association, 107(500):1590-1598, 2012.

Thesis of work: the paper proposes an algorithm (PELT) that scales linearly in running time with the size of the input time series to detect exact locations of change points. The paper aims to replace both an approximate binary partitioning algorithm and an optimal segmentation algorithm that doesn't involve a pruning mechanism to speed up the running time. The paper uses an MLE algorithm at the heart of its dynamic partitioning in order to locate change points. The relevance to the source paper is that through the use of the non-robust MLE procedure, this algorithm is vulnerable to poor performance due to the presence of anomalies/outliers in the data, and thus underperforms the new twitter change point detection methodology, which employs robust statistics.

Paper 7:

Wassily Hoeffding. The strong law of large numbers for U-statistics. Institute of Statistics mimeo series, 302, 1961.

Thesis of work: this paper establishes a convergence of the mean of tuples of many random variables to the
mean of said random variables, given enough such observations. This paper is a theoretical primer on
establishing the above thesis. The mathematics involve use of measure theory and other highly advanced and
theoretical manipulations. Its relevance to the source paper is in its use to establish a convergence of an
estimated characteristic function.

Similar Work

In terms of financial applications, the papers covering direct applications of change points to financial time
series are listed above. Particularly, David Matteson presented his ecp algorithms at R/Finance several years
ago, and his work is already in use on at least one professional trading desk. Beyond this, the paper cites
works on technical analysis and the classic ARCH and GARCH papers as similar work. However, as this
change point algorithm is created to be a batch process, direct comparison with other trend-following (that
is, breakout) methods would seem to be a case of apples and oranges, as indicators such as MACD,
Donchian channels, and so on, are online methods (meaning they do not have access to the full data set like
the e-divisive and the e-divisive with medians algorithms do). However, they are parameterized in terms of
their lookback period, and are thus prone to error in terms of inaccurate parameterization resulting from a
static lookback value.

In his book Cycle Analytics for Traders, Dr. John Ehlers details an algorithm for computing the dominant cycle of a security, that is, a way to dynamically parameterize the lookback parameter. If this were to be successfully implemented in R, it may very well allow for better breakout detection than the classic parameterized indicators popularized in the last century.

References With Implementation Hints

Reference 1: Breakout Detection In The Wild

This blog post contains the actual example included in the R package for the model, and was written by one of the authors of the source paper. As the data used in the source paper is proprietary twitter production data, and the model is already implemented in the package discussed in this blog post, this makes the package and the included data the go-to source for starting to work with the results presented in the source paper.

Reference 2: Twitter BreakoutDetection R package evaluation

This blog post is from a blogger altering the default parameters of the model. His analysis of traffic to his own blog contains valuable information on using the R package that implements the source paper with greater flexibility.

Data

The data contained in the source paper comes from proprietary twitter cloud production data. Thus, it is not realistic to obtain a copy of that particular data set. However, one of the source paper's co-authors, Arun Kejariwal, was kind enough to provide a tutorial, complete with code and sample data, for users to replicate at their convenience. It is this data that we will use for replication.

Building The Model

Stemming from the above, we are fortunate that the results of the source paper have already been implemented in twitter's released R package, BreakoutDetection. This package was written by Nicholas A. James, a PhD candidate at Cornell University studying under Dr. David S. Matteson. His page is located here.

In short, all that needs to be done on this end is to apply the model to the aforementioned data.

Validate the Results

To validate the results, that is, to obtain the same results as one of the source paper's authors, we will execute the code on the data that he posted in his blog post (see Reference 1).

require(devtools)
install_github(repo="BreakoutDetection", username="twitter")
require(BreakoutDetection)

data(Scribe)
res = breakout(Scribe, min.size=24, method='multi', beta=.001, degree=1, plot=TRUE)
res$plot

This is the resulting image, identical to the one in the blog post.


Validation of the Hypothesis

This validation was inspired by the following post:

The Relevance of History

The post was written by Dr. Robert J. Frey, professor of Applied Math and Statistics at Stony Brook University, the head of its Quantitative Finance program, and a former managing director at Renaissance Technologies (yes, the Renaissance Technologies founded by Dr. Jim Simons). While the blog is inactive at the moment, I sincerely hope it will become more active again.

Essentially, it uses Mathematica to detect changes in the slope of cumulative squared returns, and the final result is a map of spikes, mountains, and plains, with the x-axis being time and the y-axis the annualized standard deviation. Using the more formalized e-divisive and e-divisive with medians algorithms, this analysis will attempt to detect change points, use the PerformanceAnalytics library to compute the annualized standard deviation from the GSPC returns themselves, and output a similarly formatted plot.

Here's the code:

require(quantmod)
require(PerformanceAnalytics)

getSymbols("^GSPC", from = "1984-12-25", to = "2013-05-31")
monthlyEp <- endpoints(GSPC, on = "months")
GSPCmoCl <- Cl(GSPC)[monthlyEp,]
GSPCmoRets <- Return.calculate(GSPCmoCl)
GSPCsqRets <- GSPCmoRets*GSPCmoRets
GSPCsqRets <- GSPCsqRets[-1,] #remove first NA as a result of return computation
GSPCcumSqRets <- cumsum(GSPCsqRets)
plot(GSPCcumSqRets)

This results in the following image:


So far, so good. Let's now try to find the number of changepoints that Dr. Frey's graph alludes to.

t1 <- Sys.time()
ECPmonthRes <- e.divisive(X = GSPCsqRets, min.size = 2)
t2 <- Sys.time()
print(t2 - t1)

t1 <- Sys.time()
BDmonthRes <- breakout(Z = GSPCsqRets, min.size = 2, beta=0, degree=1)
t2 <- Sys.time()
print(t2 - t1)

ECPmonthRes$estimates
BDmonthRes$loc

With the following results:

> ECPmonthRes$estimates
[1]   1 285 293 342
> BDmonthRes$loc
[1] 47 87

In short, two changepoints for each, far from the 20 or so regimes present in Dr. Frey's analysis, and not close to anything that was expected. My intuition tells me that the main reason for this is that these algorithms are data-hungry, and there is too little data for them to do much more than what they have done thus far. So let's go the other way and use daily data.
dailySqRets <- Return.calculate(Cl(GSPC))*Return.calculate(Cl(GSPC))
dailySqRets <- dailySqRets["1985::"]

plot(cumsum(dailySqRets))

And here's the new plot:

First, let's try the e-divisive algorithm from the ecp package to find our changepoints, with a minimum size of 20 days between regimes. (Blog note: this is a process that takes an exceptionally long time. For me, it took more than 2 hours.)

t1 <- Sys.time()
ECPres <- e.divisive(X = dailySqRets, min.size=20)
t2 <- Sys.time()
print(t2 - t1)

Time difference of 2.214813 hours

With the following results:

> index(dailySqRets)[ECPres$estimates]
[1] "1985-01-02" "1987-10-14" "1987-11-11" "1998-07-21" "2002-07-01" "2003-07-28" "2008-09-15" "2008-12-09"
[9] "2009-06-02" NA

The first and last entries are merely the endpoints of the data. So essentially, the algorithm encapsulates Black Monday and the financial crisis, among other things. Let's look at how it split the volatility regimes. For this, we will use the xtsExtra package for its plotting functionality (thanks to Ross Bennett for the work he did in implementing it).

require(xtsExtra)
plot(cumsum(dailySqRets))
xtsExtra::addLines(index(dailySqRets)[ECPres$estimates[-c(1, length(ECPres$estimates))]], on = 1, col = "blue", lwd = 2)

With the resulting plot:

In this case, the e-divisive algorithm from the ecp package does a pretty great job segmenting the various volatility regimes, which can be thought of roughly as the slope of the cumulative squared returns. The algorithm's ability to accurately cluster the Black Monday events, along with the financial crisis, shows its industrial-strength applicability. How does this look on the price graph?

plot(Cl(GSPC))
xtsExtra::addLines(index(dailySqRets)[ECPres$estimates[-c(1, length(ECPres$estimates))]], on = 1, col = "blue", lwd = 2)

In this case, Black Monday is clearly visible, along with the end of the Clinton bull run through the dot-com
bust, the consolidation, the run-up to the crisis, the crisis itself, the consolidation, and the new bull market.

Note that the presence of a new volatility regime may not necessarily signify a market top or bottom, but the
volatility regime detection seems to have worked very well in this case.

For comparison, let's examine the e-divisive with medians algorithm.

t1 <- Sys.time()
BDres <- breakout(Z = dailySqRets, min.size = 20, beta=0, degree=1)
t2 <- Sys.time()
print(t2-t1)

BDres$loc
index(dailySqRets)[BDres$loc]

With the following result:

Time difference of 2.900167 secs

> BDres$loc
[1] 5978
> index(dailySqRets)[BDres$loc]
[1] "2008-09-12"
So while the algorithm is a lot faster, its volatility regime detection only sees the crisis as the one major change point. Beyond that, to my understanding, the e-divisive with medians algorithm may be too robust (even without any penalization) against anomalies (after all, the median is robust to changes in 50% of the data). In short, I think that while it clearly has applications, such as twitter's cloud production data, it doesn't seem to obtain a result that's in the ballpark of the two other separate procedures.

Lastly, let's try to create a plot similar to Dr. Frey's, with spikes, mountains, and plains.

require(PerformanceAnalytics)
GSPCrets <- Return.calculate(Cl(GSPC))
GSPCrets <- GSPCrets["1985::"]
GSPCrets$regime <- ECPres$cluster
GSPCrets$annVol <- NA

for(i in unique(ECPres$cluster)) {
  regime <- GSPCrets[GSPCrets$regime==i,]
  annVol <- StdDev.annualized(regime[,1])
  GSPCrets$annVol[GSPCrets$regime==i,] <- annVol
}

plot(GSPCrets$annVol, ylim=c(0, max(GSPCrets$annVol)), main="GSPC volatility regimes, 1985 to 2013-05")

With the corresponding image, inspired by Dr. Robert Frey:


This concludes the research replication.

********************************

Whew. Done. While I gained some understanding of what change points are useful for, I won't profess to be an expert on them (some of the math involved uses PhD-level concepts such as characteristic functions that I never learned). However, it was definitely interesting pulling together several different ideas and uniting them under a rigorous process.

Special thanks for this blog post:

Brian Peterson, for the process paper, for putting a formal structure to the research replication process, and for requesting this post.
Robert J. Frey, for the volatility landscape idea that served as an objective benchmark to validate the hypothesis of the paper.
David S. Matteson, for the ecp package.
Nicholas A. James, for the work done in the BreakoutDetection package (and clarifying some of its functionality for me).
Arun Kejariwal, for the tutorial on using the BreakoutDetection package.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract
or full time roles available for proprietary research that could benefit from my skills, please contact me
through my LinkedIn here.

Introducing Stepwise Correlation Rank


Posted on October 27, 2014 Posted in Asset Allocation, Data Analysis, David Varadi, Portfolio
Management, R Tagged R 7 Comments

So in the last post, I attempted to replicate the Flexible Asset Allocation paper. I'd like to offer thanks to Pat of Intelligent Trading Tech (not updated recently, hopefully this will change) for helping me corroborate the results, so that I have more confidence there isn't an error in my code.

One of the procedures the authors of the FAA paper used is a correlation rank, which I interpreted as the
average correlation of each security to the others.

The issue, pointed out to me in a phone conversation I had with David Varadi, is that when considering correlation, shouldn't the correlations the investor is concerned about be those between instruments within the portfolio, as opposed to simply all the correlations, including those to instruments not in the portfolio? To that end, when selecting assets (or possibly features in general), it conceptually makes more sense to select in a stepwise fashion, that is, start off with a subset of the correlation matrix, and then rank assets in order of their correlation to the heretofore selected assets, as opposed to all of them. This was explained in Mr. Varadi's recent post.

Here's a work-in-progress function I wrote to formally code this idea:

stepwiseCorRank <- function(corMatrix, startNames=NULL, stepSize=1, bestHighestRank=FALSE) {
  #edge cases
  if(dim(corMatrix)[1] == 1) {
    return(corMatrix)
  } else if (dim(corMatrix)[1] == 2) {
    ranks <- c(1.5, 1.5)
    names(ranks) <- colnames(corMatrix)
    return(ranks)
  }

  if(is.null(startNames)) {
    corSums <- rowSums(corMatrix)
    corRanks <- rank(corSums)
    startNames <- names(corRanks)[corRanks <= stepSize]
  }
  nameList <- list()
  nameList[[1]] <- startNames
  rankList <- list()
  rankCount <- 1
  rankList[[1]] <- rep(rankCount, length(startNames))
  rankedNames <- do.call(c, nameList)

  while(length(rankedNames) < nrow(corMatrix)) {
    rankCount <- rankCount+1
    subsetCor <- corMatrix[, rankedNames]
    if(class(subsetCor) != "numeric") {
      subsetCor <- subsetCor[!rownames(corMatrix) %in% rankedNames,]
      if(class(subsetCor) != "numeric") {
        corSums <- rowSums(subsetCor)
        corSumRank <- rank(corSums)
        lowestCorNames <- names(corSumRank)[corSumRank <= stepSize]
        nameList[[rankCount]] <- lowestCorNames
        rankList[[rankCount]] <- rep(rankCount, min(stepSize, length(lowestCorNames)))
      } else { #1 name remaining
        nameList[[rankCount]] <- rownames(corMatrix)[!rownames(corMatrix) %in% names(subsetCor)]
        rankList[[rankCount]] <- rankCount
      }
    } else { #first iteration, subset on first name
      subsetCorRank <- rank(subsetCor)
      lowestCorNames <- names(subsetCorRank)[subsetCorRank <= stepSize]
      nameList[[rankCount]] <- lowestCorNames
      rankList[[rankCount]] <- rep(rankCount, min(stepSize, length(lowestCorNames)))
    }
    rankedNames <- do.call(c, nameList)
  }

  ranks <- do.call(c, rankList)
  names(ranks) <- rankedNames
  if(bestHighestRank) {
    ranks <- 1+length(ranks)-ranks
  }
  ranks <- ranks[colnames(corMatrix)] #return to original order
  return(ranks)
}

So the way the function works is that it takes in a correlation matrix, a starting name (if provided), and a step size (that is, how many assets to select per step, so that the process doesn't become extremely long when dealing with larger amounts of assets/features). Then, it iterates: subset the correlation matrix on the starting name, find the minimum value, and add its name to the list of already-selected names. Next, subset the correlation matrix columns on the selected names and the rows on the not-yet-selected names, and repeat until all names have been accounted for. Due to R's little habit of wiping out labels when a matrix becomes a vector, I had to write some special-case code, which is the reason for the two nested if/else statements (the first one being for the first column subset, and the second being for when there's only one row remaining).

Also, if there's an edge case (1 or 2 securities), there is some functionality to handle those trivial cases.
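To see the mechanics on something small enough to trace by hand, here is a toy example with a made-up 3x3 correlation matrix (the numbers are purely illustrative):

#toy correlation matrix: A and B are highly correlated, C is nearly uncorrelated
toyCor <- matrix(c( 1, .9, .1,
                   .9,  1, .2,
                   .1, .2,  1), nrow = 3,
                 dimnames = list(c("A", "B", "C"), c("A", "B", "C")))
stepwiseCorRank(toyCor, startNames = "A")
#result: A gets rank 1, C rank 2, and B rank 3; starting from A, the least-correlated
#asset (C) is ranked ahead of the highly correlated one (B)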

Here's a test script I wrote to test this function out:

require(PerformanceAnalytics)
require(quantmod)

#the seven FAA mutual funds used below
mutualFunds <- c("VTSMX", "FDIVX", "VEIEX", "VFISX", "VBMFX", "QRAAX", "VGSIX")

#mid 1997 to end of 2012
getSymbols(mutualFunds, from="1997-06-30", to="2012-12-31")
tmp <- list()
for(fund in mutualFunds) {
  tmp[[fund]] <- Ad(get(fund))
}

#always use a list when intending to cbind/rbind large quantities of objects
adPrices <- do.call(cbind, args = tmp)
colnames(adPrices) <- gsub(".Adjusted", "", colnames(adPrices))

adRets <- Return.calculate(adPrices)

subset <- adRets["2012"]
corMat <- cor(subset)

tmp <- list()
for(i in 1:length(mutualFunds)) {
  rankRow <- stepwiseCorRank(corMat, startNames=mutualFunds[i])
  tmp[[i]] <- rankRow
}
rankDemo <- do.call(rbind, tmp)
rownames(rankDemo) <- mutualFunds
origRank <- rank(rowSums(corMat))
rankDemo <- rbind(rankDemo, origRank)
rownames(rankDemo)[8] <- "Non-Sequential"

heatmap(-rankDemo, Rowv=NA, Colv=NA, col=heat.colors(8), margins=c(6,6))

Essentially, using the 2012 year of returns for the 7 FAA mutual funds, I compared how different starting
securities changed the correlation ranking sequence.

Here are the results:

               VTSMX FDIVX VEIEX VFISX VBMFX QRAAX VGSIX
VTSMX              1     6     7     4     2     3     5
FDIVX              6     1     7     4     2     5     3
VEIEX              6     7     1     4     2     3     5
VFISX              2     6     7     1     3     4     5
VBMFX              2     6     7     4     1     3     5
QRAAX              5     6     7     4     2     1     3
VGSIX              5     6     7     4     2     3     1
Non-Sequential     5     6     7     2     1     3     4

In short, the algorithm is rather robust to starting-security selection, at least judging by this small example. However, comparing the VBMFX start to the non-sequential ranking, we see that VFISX changes from rank 2 in the non-sequential ranking to rank 4, with VTSMX going from rank 5 to rank 2. From an intuitive perspective, this makes sense, as both VBMFX and VFISX are bond funds, which have a low correlation with the other 5 equity-based mutual funds, but a higher correlation with each other, thus signifying that the algorithm seems to be working as intended, at least insofar as this small example demonstrates. Here's a heatmap to demonstrate this in visual form.
The ranking order (starting security) is on the vertical axis, and the ranks are on the horizontal, from white being first to red being last. Notice once again that the ranking orders are robust in general (consider each column of colors descending), but each particular ranking order is unique.

So far, this code still has to be tested in terms of its applications to portfolio management and asset allocation, but for those interested in such an idea, it's my hope that this provides a good reference point.

Thanks for reading.

Intermission: A Quick Thought on Robust Kurtosis

Posted on September 10, 2014 Posted in Data Analysis, R Tagged R Leave a comment

This post was inspired by some musings from John Bollinger: since data in the financial world isn't normally distributed, there might be more robust computations to indicate skewness and kurtosis. For instance, one way to think about skewness is the difference between mean and median. That is, if the mean is less than the median, the distribution is left-skewed, and vice versa.

This post attempts to extend that thinking to kurtosis. That is, just as skew can be thought of as a relationship between mean and median, so too might kurtosis be thought of as a relationship between two measures of spread: the standard deviation and the more robust interquartile range. So, I performed an experiment simulating 10000 observations from a standard normal and 10000 observations from a standard double-exponential distribution.

Here's the experiment I ran.

set.seed(1234)
norms <- rnorm(10000)
dexps <- rexp(10000) * sign(rnorm(10000))
plot(density(dexps))
lines(density(norms), col="red")
(IQR(norms))
(IQR(dexps))
(sd(norms))
(sd(dexps))
(sd(norms)/IQR(norms))
(sd(dexps)/IQR(dexps))

And here's the output:

> (IQR(norms))
[1] 1.330469
> (IQR(dexps))
[1] 1.35934
> (sd(norms))
[1] 0.9875294
> (sd(dexps))
[1] 1.393057
> (sd(norms)/IQR(norms))
[1] 0.7422415
> (sd(dexps)/IQR(dexps))
[1] 1.024804

That is, the ratio of standard deviation to interquartile range is higher in a heavier-tailed distribution than in the standard normal. I'm not certain that this assertion holds in all general cases, but it seems to make intuitive sense: with heavier tails, the same number of observations is more spread out.
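Putting the above into a small helper, here is one possible way to package this ratio as a "robust kurtosis" proxy. The function name and the optional scaling constant (1.349, approximately the IQR of a standard normal in units of its standard deviation, so that a normal sample scores near 1) are my own additions for illustration, not something from Bollinger's musings.

#ratio of standard deviation to interquartile range; values meaningfully above the
#normal benchmark suggest heavier tails
robKurtProxy <- function(x, scale = TRUE) {
  ratio <- sd(x, na.rm = TRUE) / IQR(x, na.rm = TRUE)
  if (scale) ratio * 1.349 else ratio  #1.349 is roughly the IQR of a standard normal
}

set.seed(1234)
robKurtProxy(rnorm(10000))                      #close to 1 for the normal
robKurtProxy(rexp(10000) * sign(rnorm(10000)))  #well above 1 for the double exponential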

Thanks for reading.

Comparing ATR order sizing to max dollar order sizing

Posted on August 29, 2014 Posted in Data Analysis, ETFs, QuantStrat, R, Trading Tagged R 2 Comments

First off, it has come to my attention that some readers have trouble getting some of my demos to work because there may be different versions of TTR in use. If ever your demo doesn't work, the first thing I would immediately recommend you do is this:

Only run the code through the add.indicator logic. And then, rather than adding the signals and rules, run the
following code:

test <- applyIndicators(strategy.st, mktdata=OHLC(XLB))
head(test)

That should show you the exact column names of your indicators, and you can adjust your inputs accordingly. While one of my first posts introduced the ATR order-sizing function, I recently received a suggestion to test it in the context of whether or not it actually normalizes risk across instruments. To keep things simple, my strategy is as plain vanilla as strategies come: RSI2 20/80, filtered on SMA200.
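Before the full strategy code, here is the core intuition behind ATR order sizing, sketched as a standalone calculation: size the position so that a one-ATR move corresponds to a fixed fraction of the notional trade size. This is a conceptual illustration only (the actual osDollarATR function in IKTrading also handles lagging the ATR, rounding, and integration with quantstrat), and the helper name below is hypothetical.

#conceptual ATR sizing: risk a fixed dollar amount (tradeSize * pctATR) per one-ATR move
atrOrderQty <- function(tradeSize, pctATR, atr) {
  dollarRisk <- tradeSize * pctATR  #e.g. 100000 * .02 = 2000 dollars per ATR unit
  round(dollarRisk / atr)           #shares such that a one-ATR move is roughly dollarRisk
}

atrOrderQty(tradeSize = 100000, pctATR = .02, atr = 1.5) #about 1333 shares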

Here's the code for the ATR order-sizing version, for completeness's sake.

require(IKTrading)
require(quantstrat)
require(PerformanceAnalytics)

initDate="1990-01-01"
from="2003-01-01"
to="2012-12-31"
options(width=70)

source("demoData.R")

#trade sizing and initial equity settings
tradeSize <- 100000
initEq <- tradeSize*length(symbols)

strategy.st <- portfolio.st <- account.st <- "DollarVsATRos"
rm.strat(portfolio.st)
rm.strat(strategy.st)
initPortf(portfolio.st, symbols=symbols, initDate=initDate, currency='USD')
initAcct(account.st, portfolios=portfolio.st, initDate=initDate, currency='USD', initEq=initEq)
initOrders(portfolio.st, initDate=initDate)
strategy(strategy.st, store=TRUE)

#parameters
pctATR=.02
period=10

nRSI <- 2
buyThresh <- 20
sellThresh <- 80
nSMA <- 200

add.indicator(strategy.st, name="lagATR",
              arguments=list(HLC=quote(HLC(mktdata)), n=period),
              label="atrX")

add.indicator(strategy.st, name="RSI",
              arguments=list(price=quote(Cl(mktdata)), n=nRSI),
              label="rsi")

add.indicator(strategy.st, name="SMA",
              arguments=list(x=quote(Cl(mktdata)), n=nSMA),
              label="sma")

#signals
add.signal(strategy.st, name="sigComparison",
           arguments=list(columns=c("Close", "sma"), relationship="gt"),
           label="filter")

add.signal(strategy.st, name="sigThreshold",
           arguments=list(column="rsi", threshold=buyThresh,
                          relationship="lt", cross=FALSE),
           label="rsiLtThresh")

add.signal(strategy.st, name="sigAND",
           arguments=list(columns=c("filter", "rsiLtThresh"), cross=TRUE),
           label="longEntry")

add.signal(strategy.st, name="sigThreshold",
           arguments=list(column="rsi", threshold=sellThresh,
                          relationship="gt", cross=TRUE),
           label="longExit")

add.signal(strategy.st, name="sigCrossover",
           arguments=list(columns=c("Close", "sma"), relationship="lt"),
           label="filterExit")

#rules
add.rule(strategy.st, name="ruleSignal",
         arguments=list(sigcol="longEntry", sigval=TRUE, ordertype="market",
                        orderside="long", replace=FALSE, prefer="Open",
                        osFUN=osDollarATR,
                        tradeSize=tradeSize, pctATR=pctATR, atrMod="X"),
         type="enter", path.dep=TRUE)

add.rule(strategy.st, name="ruleSignal",
         arguments=list(sigcol="longExit", sigval=TRUE, orderqty="all", ordertype="market",
                        orderside="long", replace=FALSE, prefer="Open"),
         type="exit", path.dep=TRUE)

add.rule(strategy.st, name="ruleSignal",
         arguments=list(sigcol="filterExit", sigval=TRUE, orderqty="all", ordertype="market",
                        orderside="long", replace=FALSE, prefer="Open"),
         type="exit", path.dep=TRUE)

#apply strategy
t1 <- Sys.time()
out <- applyStrategy(strategy=strategy.st, portfolios=portfolio.st)
t2 <- Sys.time()
print(t2-t1)

#set up analytics
updatePortf(portfolio.st)
dateRange <- time(getPortfolio(portfolio.st)$summary)[-1]
updateAcct(portfolio.st, dateRange)
updateEndEq(account.st)
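One housekeeping note: the analytics below reference a few objects (tStats, instRets, portfRets, SPYrets, dailyRetComparison) whose construction is not shown in the snippet above. A rough sketch of how they might be built with quantstrat/blotter follows, under the assumption that SPY price data is available in the workspace; this is a sketch, not the exact code behind the original results.

tStats <- tradeStats(Portfolios = portfolio.st)  #per-symbol trade statistics

instRets <- PortfReturns(account.st)  #per-instrument daily returns on initial account equity
portfRets <- xts(rowSums(instRets), order.by = index(instRets))  #aggregate strategy returns

SPYrets <- Return.calculate(Cl(SPY))[index(portfRets)]  #benchmark returns, aligned to the strategy
dailyRetComparison <- cbind(portfRets, SPYrets)
colnames(dailyRetComparison) <- c("strategy", "SPY")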
Here are some of the usual analytics, which don't interest me in and of themselves as this strategy is rather throwaway, but which I'll compare to what happens when I use the max dollar order sizing function in a moment:

> (aggPF <- sum(tStats$Gross.Profits)/-sum(tStats$Gross.Losses))
[1] 1.659305
> (aggCorrect <- mean(tStats$Percent.Positive))
[1] 69.24967
> (numTrades <- sum(tStats$Num.Trades))
[1] 3017
> (meanAvgWLR <- mean(tStats$Avg.WinLoss.Ratio[tStats$Avg.WinLoss.Ratio < Inf], na.rm=TRUE))
[1] 0.733

> SharpeRatio.annualized(portfRets)
                                     [,1]
Annualized Sharpe Ratio (Rf=0%) 0.9783541
> Return.annualized(portfRets)
                        [,1]
Annualized Return 0.07369592
> maxDrawdown(portfRets)
[1] 0.08405041

> round(apply.yearly(dailyRetComparison, Return.cumulative),3)
           strategy    SPY
2003-12-31    0.052  0.066
2004-12-31    0.074  0.079
2005-12-30    0.045  0.025
2006-12-29    0.182  0.132
2007-12-31    0.117  0.019
2008-12-31   -0.010 -0.433
2009-12-31    0.130  0.192
2010-12-31   -0.005  0.110
2011-12-30    0.069 -0.028
2012-12-31    0.087  0.126
> round(apply.yearly(dailyRetComparison, SharpeRatio.annualized),3)
           strategy    SPY
2003-12-31    1.867  3.641
2004-12-31    1.020  0.706
2005-12-30    0.625  0.238
2006-12-29    2.394  1.312
2007-12-31    1.105  0.123
2008-12-31   -0.376 -1.050
2009-12-31    1.752  0.719
2010-12-31   -0.051  0.614
2011-12-30    0.859 -0.122
2012-12-31    1.201  0.990
> round(apply.yearly(dailyRetComparison, maxDrawdown),3)
           strategy   SPY
2003-12-31    0.018 0.025
2004-12-31    0.065 0.085
2005-12-30    0.053 0.074
2006-12-29    0.074 0.077
2007-12-31    0.066 0.102
2008-12-31    0.032 0.520
2009-12-31    0.045 0.280
2010-12-31    0.084 0.167
2011-12-30    0.053 0.207
2012-12-31    0.050 0.099

Now here's a new bit of analytics: comparing annualized standard deviations between securities:

> sdQuantile <- quantile(sapply(instRets, sd.annualized))
> sdQuantile
         0%         25%         50%         75%        100% 
0.004048235 0.004349390 0.004476377 0.004748530 0.005557765 
> (extremeRatio <- sdQuantile[5]/sdQuantile[1]-1)
    100% 
0.372886 
> (boxBorderRatio <- sdQuantile[4]/sdQuantile[2]-1)
      75% 
0.0917693 

In short, because the instrument returns are computed as a function of only the initial account equity (quantstrat doesn't know that I'm allocating a notional cash amount to each separate ETF, because I'm really not; I just treat it as one pile of cash that I mentally think of as being divided equally between all 30 ETFs), the returns per instrument have already implicitly factored in the weighting scheme from the order-sizing function. In this case, the most volatile instrument is about 37% more volatile than the least, and since I'm dealing with indices of small nations along with short-term treasury bills in ETF form, I'd say that's impressive.

More impressive, in my opinion, is that the difference in volatility between the 25th and 75th percentile is about 9%. It means that our ATR order sizing seems to be doing its job. Here are the raw computations in terms of annualized volatility:

> sapply(instRets, sd.annualized)
EFA.DailyEndEq EPP.DailyEndEq EWA.DailyEndEq EWC.DailyEndEq 
   0.004787248    0.005557765    0.004897699    0.004305728 
EWG.DailyEndEq EWH.DailyEndEq EWJ.DailyEndEq EWS.DailyEndEq 
   0.004806879    0.004782505    0.004460708    0.004618460 
EWT.DailyEndEq EWU.DailyEndEq EWY.DailyEndEq EWZ.DailyEndEq 
   0.004417686    0.004655716    0.004888876    0.004858743 
EZU.DailyEndEq IEF.DailyEndEq IGE.DailyEndEq IYR.DailyEndEq 
   0.004631333    0.004779468    0.004617250    0.004359273 
IYZ.DailyEndEq LQD.DailyEndEq RWR.DailyEndEq SHY.DailyEndEq 
   0.004346095    0.004101408    0.004388131    0.004585389 
TLT.DailyEndEq XLB.DailyEndEq XLE.DailyEndEq XLF.DailyEndEq 
   0.004392335    0.004319708    0.004515228    0.004426415 
XLI.DailyEndEq XLK.DailyEndEq XLP.DailyEndEq XLU.DailyEndEq 
   0.004129331    0.004492046    0.004369804    0.004048235 
XLV.DailyEndEq XLY.DailyEndEq 
   0.004148445    0.004203503 
And here's a histogram of those same calculations:

In this case, the reason that the extreme computation gives us a 37% greater result is that one security, EPP (Pacific ex-Japan, which for all intents and purposes is emerging markets), is simply out there a bit. The rest just seem very clumped up.

Now let's remove the ATR order sizing and replace it with a simple osMaxDollar rule, which simply keeps a position topped off at a notional dollar value. In short, aside from a few possible one-way position-rebalancing transactions (e.g. with the ATR order sizing rule, ATR may have gone up while the total value of a position went down, which may trigger the osMaxDollar rule but not the osDollarATR rule on a second RSI cross), the two sets of trades should be nearly identical. Here's the new entry rule, with the ATR version commented out:

# add.rule(strategy.st, name="ruleSignal",
#          arguments=list(sigcol="longEntry", sigval=TRUE, ordertype="market",
#                         orderside="long", replace=FALSE, prefer="Open",
#                         osFUN=osDollarATR,
#                         tradeSize=tradeSize, pctATR=pctATR, atrMod="X"),
#          type="enter", path.dep=TRUE)

add.rule(strategy.st, name="ruleSignal",
         arguments=list(sigcol="longEntry", sigval=TRUE, ordertype="market",
                        orderside="long", replace=FALSE, prefer="Open",
                        osFUN=osMaxDollar,
                        tradeSize=tradeSize, maxSize=tradeSize),
         type="enter", path.dep=TRUE)
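For intuition, osMaxDollar conceptually tops a position off at a target notional value regardless of the instrument's volatility. A rough, illustrative sketch of that logic follows (not the actual IKTrading source, which also deals with rounding conventions and reading the current position from blotter):

#conceptual max-dollar sizing: buy only enough shares to bring the position up to
#a notional cap, and never add beyond it
maxDollarQty <- function(price, currentQty, tradeSize) {
  targetQty <- floor(tradeSize / price)  #shares corresponding to the notional cap
  max(targetQty - currentQty, 0)         #only top off, never exceed the cap
}

maxDollarQty(price = 50, currentQty = 1200, tradeSize = 100000) #800 more shares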

Let's look at the corresponding statistical results:

> (aggPF <- sum(tStats$Gross.Profits)/-sum(tStats$Gross.Losses))
[1] 1.635629
> (aggCorrect <- mean(tStats$Percent.Positive))
[1] 69.45633
> (numTrades <- sum(tStats$Num.Trades))
[1] 3019
> (meanAvgWLR <- mean(tStats$Avg.WinLoss.Ratio[tStats$Avg.WinLoss.Ratio < Inf], na.rm=TRUE))
[1] 0.735

> SharpeRatio.annualized(portfRets)
                                     [,1]
Annualized Sharpe Ratio (Rf=0%) 0.8529713
> Return.annualized(portfRets)
                        [,1]
Annualized Return 0.04857159
> maxDrawdown(portfRets)
[1] 0.06682969
>
> dailyRetComparison <- cbind(portfRets, SPYrets)
> colnames(dailyRetComparison) <- c("strategy", "SPY")
> round(apply.yearly(dailyRetComparison, Return.cumulative),3)
           strategy    SPY
2003-12-31    0.034  0.066
2004-12-31    0.055  0.079
2005-12-30    0.047  0.025
2006-12-29    0.090  0.132
2007-12-31    0.065  0.019
2008-12-31   -0.023 -0.433
2009-12-31    0.141  0.192
2010-12-31   -0.010  0.110
2011-12-30    0.038 -0.028
2012-12-31    0.052  0.126
> round(apply.yearly(dailyRetComparison, SharpeRatio.annualized),3)
           strategy    SPY
2003-12-31    1.639  3.641
2004-12-31    1.116  0.706
2005-12-30    0.985  0.238
2006-12-29    1.755  1.312
2007-12-31    0.785  0.123
2008-12-31   -0.856 -1.050
2009-12-31    1.774  0.719
2010-12-31   -0.134  0.614
2011-12-30    0.686 -0.122
2012-12-31    1.182  0.990
> round(apply.yearly(dailyRetComparison, maxDrawdown),3)
           strategy   SPY
2003-12-31    0.015 0.025
2004-12-31    0.035 0.085
2005-12-30    0.033 0.074
2006-12-29    0.058 0.077
2007-12-31    0.058 0.102
2008-12-31    0.036 0.520
2009-12-31    0.043 0.280
2010-12-31    0.062 0.167
2011-12-30    0.038 0.207
2012-12-31    0.035 0.099

And now for the kicker: to see just how much riskier it is to use a naive order-sizing method that doesn't take into account the idiosyncratic risk of each security:

> sdQuantile <- quantile(sapply(instRets, sd.annualized))
> sdQuantile
          0%          25%          50%          75%         100% 
0.0002952884 0.0026934043 0.0032690492 0.0037727970 0.0061480828 
> (extremeRatio <- sdQuantile[5]/sdQuantile[1]-1)
   100% 
19.8206 
> (boxBorderRatio <- sdQuantile[4]/sdQuantile[2]-1)
     75% 
0.400754 
> hist(sapply(instRets, sd.annualized))
In short, the ratio between the riskiest and least risky asset rises from less than 40% to about 1,900%. But in case that's too much of an outlier (e.g. dealing with treasury bill/note/bond ETFs vs. Pacific ex-Japan, aka emerging Asia), the ratio between the third and first quartiles of volatility has jumped from 9% to 40%.

Here's the corresponding histogram:

As can be seen, there is a visibly higher variance in variances, in other words, a second moment on the second moment. Not using an order-sizing function that takes into account individual security risk therefore introduces unnecessary kurtosis and heavier tails into the risk/reward profile, and due to this unnecessary excess risk, performance suffers measurably. Here are the individual security annualized standard deviations for the max dollar order sizing method:

> sapply(instRets, sd.annualized)
EFA.DailyEndEq EPP.DailyEndEq EWA.DailyEndEq EWC.DailyEndEq 
  0.0029895232   0.0037767697   0.0040222015   0.0036137500 
EWG.DailyEndEq EWH.DailyEndEq EWJ.DailyEndEq EWS.DailyEndEq 
  0.0037097070   0.0039615376   0.0030398638   0.0037608791 
EWT.DailyEndEq EWU.DailyEndEq EWY.DailyEndEq EWZ.DailyEndEq 
  0.0041140227   0.0032204771   0.0047719772   0.0061480828 
EZU.DailyEndEq IEF.DailyEndEq IGE.DailyEndEq IYR.DailyEndEq 
  0.0033176214   0.0013059712   0.0041621776   0.0033752435 
IYZ.DailyEndEq LQD.DailyEndEq RWR.DailyEndEq SHY.DailyEndEq 
  0.0026899679   0.0011777797   0.0034789117   0.0002952884 
TLT.DailyEndEq XLB.DailyEndEq XLE.DailyEndEq XLF.DailyEndEq 
  0.0024854557   0.0034895815   0.0043568967   0.0029546665 
XLI.DailyEndEq XLK.DailyEndEq XLP.DailyEndEq XLU.DailyEndEq 
  0.0027963302   0.0028882028   0.0021212224   0.0025802850 
XLV.DailyEndEq XLY.DailyEndEq 
  0.0020399289   0.0027037138 

Is ATR order sizing the absolute best order-sizing methodology? Most certainly not. In fact, in the PortfolioAnalytics package (quantstrat's syntax was modeled from it), there are ways to explicitly penalize the higher order moments and co-moments. However, in this case, ATR order sizing works as a simple yet somewhat effective demonstration of risk-adjusted order sizing, while implicitly combating some of the risks of not paying attention to the higher moments of the distributions of returns, and also still remaining fairly close to shore in terms of ease of explanation to those without heavy quantitative backgrounds. This facilitates marketing to large asset managers that may otherwise be hesitant to invest in a more complex strategy that they may not so easily understand.

Thanks for reading.

Category Archives: David Varadi


A Closer Update To David Varadi's Percentile Channels Strategy
Posted on February 20, 2015 Posted in Asset Allocation, David Varadi, ETFs, Portfolio Management, R,
Replication Tagged R 1 Comment

So thanks to seeing Michael Kapler's implementation of David Varadi's percentile channels strategy, I was able to get a better understanding of what was going on. It turns out that rather than looking at the channel value only at the ends of months, the strategy actually keeps track of the channel's value intra-month. So if in the middle of the month you had a sell signal, and at the end of the month the price moved up to intra-channel values, you would still be on a sell signal rather than the previous month's end-of-month signal. It's not much different from my previous implementation when all is said and done (slightly higher Sharpe, slightly lower returns and drawdowns). In any case, the concept remains the same.

For this implementation, I'm going to use the runquantile function from the caTools package, which works like a generalized runMedian/runMin/runMax from TTR, once you're able to give it the proper arguments (on default, its results are questionable).
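As a quick illustration of why the arguments matter, here is a toy comparison of runquantile with its default end rule versus endrule="trim" (the setting used in the code below); the toy vector is just for demonstration.

require(caTools)
x <- 1:10
runquantile(x, k = 5, probs = .75)                    #length 10: the ends use shrinking windows
runquantile(x, k = 5, probs = .75, endrule = "trim")  #length 6: only full 5-observation windows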

Here's the code:


require(quantmod)
require(caTools)
require(PerformanceAnalytics)
require(TTR)
getSymbols(c("LQD", "DBC", "VTI", "ICF", "SHY"), from="1990-01-01")

prices <- cbind(Ad(LQD), Ad(DBC), Ad(VTI), Ad(ICF), Ad(SHY))
prices <- prices[!is.na(prices[,2]),]
returns <- Return.calculate(prices)
cashPrices <- prices[, 5]
assetPrices <- prices[, -5]

pctChannelPosition <- function(prices,
                               dayLookback = 60,
                               lowerPct = .25, upperPct = .75) {
  leadingNAs <- matrix(nrow=dayLookback-1, ncol=ncol(prices), NA)

  upperChannels <- runquantile(prices, k=dayLookback, probs=upperPct, endrule="trim")
  upperQ <- xts(rbind(leadingNAs, upperChannels), order.by=index(prices))

  lowerChannels <- runquantile(prices, k=dayLookback, probs=lowerPct, endrule="trim")
  lowerQ <- xts(rbind(leadingNAs, lowerChannels), order.by=index(prices))

  positions <- xts(matrix(nrow=nrow(prices), ncol=ncol(prices), NA), order.by=index(prices))
  positions[prices > upperQ & lag(prices) < upperQ] <- 1 #cross up
  positions[prices < lowerQ & lag(prices) > lowerQ] <- -1 #cross down
  positions <- na.locf(positions)
  positions[is.na(positions)] <- 0

  colnames(positions) <- colnames(prices)
  return(positions)
}

#find our positions, add them up
d60 <- pctChannelPosition(assetPrices)
d120 <- pctChannelPosition(assetPrices, dayLookback = 120)
d180 <- pctChannelPosition(assetPrices, dayLookback = 180)
d252 <- pctChannelPosition(assetPrices, dayLookback = 252)
compositePosition <- (d60 + d120 + d180 + d252)/4

compositeMonths <- compositePosition[endpoints(compositePosition, on="months"),]

returns <- Return.calculate(prices)
monthlySD20 <- xts(sapply(returns[,-5], runSD, n=20), order.by=index(prices))[index(compositeMonths),]
weight <- compositeMonths*1/monthlySD20
weight <- abs(weight)/rowSums(abs(weight))
weight[compositeMonths < 0 | is.na(weight)] <- 0
weight$CASH <- 1-rowSums(weight)

#not actually equal weight--more like composite weight, going with
#Michael Kapler's terminology here
ewWeight <- abs(compositeMonths)/rowSums(abs(compositeMonths))
ewWeight[compositeMonths < 0 | is.na(ewWeight)] <- 0
ewWeight$CASH <- 1-rowSums(ewWeight)

rpRets <- Return.portfolio(R = returns, weights = weight)
ewRets <- Return.portfolio(R = returns, weights = ewWeight)

Essentially, with runquantile, you need to give it the trim argument, then manually append the leading NAs, and then manually turn the result into an xts object, which is annoying. One would think that the author of this package would take care of these quality-of-life issues, but no. In any case, there are two strategies at play here: one being the percentile channel risk parity strategy, and the other what Michael Kapler calls channel equal weight, which actually *isn't* an equal weight strategy, since the composite parameter values may take the values -1, -.5, 0, .5, and 1 (with a possibility for .75 or .25 early on, when some of the lookback channels still say 0 instead of only 1 or -1), but is simply the composite weights without taking volatility into account at all; I'm sticking with Michael Kapler's terminology to be consistent. That said, I don't personally use Michael Kapler's SIT package, due to the vast differences in syntax between it and the usual R code I'm used to. However, your mileage may vary.
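To make the "channel equal weight" arithmetic concrete, here is a toy example of how a single month's composite scores would map to weights and a cash allocation (the numbers are made up for illustration):

#one month of hypothetical composite scores for the four assets
composite <- c(LQD = 1, DBC = -0.5, VTI = 0.5, ICF = 0)

w <- abs(composite) / sum(abs(composite))  #0.50, 0.25, 0.25, 0.00
w[composite < 0] <- 0                      #negative-score assets are dropped...
cash <- 1 - sum(w)                         #...and their 0.25 weight goes to SHY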

In any case, here's the updated performance:

both <- cbind(rpRets, ewRets)
colnames(both) <- c("RiskParity", "Equal Weight")
charts.PerformanceSummary(both)
rbind(table.AnnualizedReturns(both), maxDrawdown(both))
apply.yearly(both, Return.cumulative)

Which gives us the following output:

> rbind(table.AnnualizedReturns(both), maxDrawdown(both))
                          RiskParity Equal Weight
Annualized Return         0.09380000    0.1021000
Annualized Std Dev        0.06320000    0.0851000
Annualized Sharpe (Rf=0%) 1.48430000    1.1989000
Worst Drawdown            0.06894391    0.1150246

> apply.yearly(both, Return.cumulative)
           RiskParity Equal Weight
2006-12-29 0.08352255   0.07678321
2007-12-31 0.05412147   0.06475540
2008-12-31 0.10663085   0.12212063
2009-12-31 0.11920721   0.19093131
2010-12-31 0.13756460   0.14594317
2011-12-30 0.11744706   0.08707801
2012-12-31 0.07730896   0.06085295
2013-12-31 0.06733187   0.08174173
2014-12-31 0.06435030   0.07357458
2015-02-17 0.01428705   0.01568372

In short, the more naive weighting scheme delivers slightly higher returns but pays dearly for those marginal
returns with downside risk.

Here are the equity curves:


So, there you have it. The results David Varadi obtained are legitimate. But nevertheless, I hope this
demonstrates how easy it is for the small details to make material differences.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract
or full time roles available for proprietary research that could benefit from my skills, please contact me
through my LinkedIn here.

An Attempt At Replicating David Varadi's Percentile Channels Strategy
Posted on February 17, 2015 Posted in Asset Allocation, David Varadi, ETFs, Portfolio Management, R,
Replication Tagged R 20 Comments

This post will detail an attempt at replicating David Varadi's percentile channels strategy. As I'm only able to obtain data back to mid-2006, the exact statistics will not be identical. However, the performance I do have is similar (but not identical) to the corresponding performance presented by David Varadi.

First off, before beginning this post, I'd like to issue a small mea culpa regarding the last post. It turns out that Yahoo's data, once it gets into single-digit dollar prices, is of questionable accuracy, and thus results from the late 90s on mutual funds with prices falling into those ranges are questionable as well. As I am an independent blogger, and also make it a policy that readers should be able to replicate all of my analysis, I am constrained by free data sources, and sometimes the questionable quality of that data may materially affect results. So, if it's one of your strategies replicated on this blog, and you find contention with my results, I would be more than happy to work with the data used to generate the original results, corroborate the results, and be certain that any differences in results from using lower-quality, publicly available data stem from that alone. Generally, I find it surprising that a company as large as Yahoo can have such gaping data quality issues in certain aspects, but I'm happy that I was able to replicate the general thrust of QTS very closely.

This replication of David Varadi's strategy, however, is not one such case, mainly because the data for DBC does not extend back very far (its inception was only in 2006, and the data used by David Varadi's programmer was obtained from Bloomberg, which I have no access to), and furthermore, I'm not certain if my methods are absolutely identical. Nevertheless, the strategy in and of itself is solid.

The way the strategy works is like this (per my interpretation of David Varadi's post and communication with his other programmer). Given four securities (LQD, DBC, VTI, ICF) and a cash security (SHY), do the following:

Find the running n-day quantiles at an upper and a lower percentile. Anything above the upper percentile gets a score of 1; anything below the lower percentile gets a score of -1. Leave the rest as NA (that is, anything between the bounds).

Subset these quantities on their monthly endpoints. Any value between channels (NA) takes the quantity of
the last value. (In short, na.locf). Any initial NAs become zero.

Do this with a 60-day, 120-day, 180-day, and 252-day setting at 25th and 75th percentiles. Add these four
tables up (their dimensions are the number of monthly endpoints by the number of securities) and divide by
the number of parameter settings (in this case, 4 for 60, 120, 180, 252) to obtain a composite position.

Next, obtain a running 20-day standard deviation of the returns (not prices!), and subset it for the same
indices as the composite positions. Take the inverse of these volatility scores, and multiply it by the
composite positions to get an inverse volatility position. Take its absolute value (some positions may be
negative, remember), and normalize. In the beginning, there may be some zero-across-all-assets positions, or
other NAs due to lack of data (EG if a monthly endpoint occurs before enough data to compute a 20-day
standard deviation, there will be a row of NAs), which will be dealt with. Keep all positions with a positive
composite position (that is, scores of .5 or 1, discard all scores of zero or lower), and reinvest the remainder
into the cash asset (SHY, in our case). Those are the final positions used to generate the returns.
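
Before getting to the actual implementation, here is a toy numeric sketch of two of the steps above (the NA
imputation and the inverse-volatility normalization). The numbers are made up purely for illustration.

#toy sketch of the NA handling: carry the last channel score forward,
#then set any leading NAs (before the first signal) to zero
require(xts)
scores <- xts(c(NA, 1, NA, NA, -1, NA), order.by=Sys.Date()-6:1)
scores <- na.locf(scores)
scores[is.na(scores)] <- 0

#toy sketch of the inverse-volatility step for three assets
composite <- c(LQD=1, DBC=-.5, VTI=.5)   #hypothetical composite scores
vol20 <- c(LQD=.004, DBC=.012, VTI=.008) #hypothetical 20-day return vols

invVolPos <- (1/vol20) * composite            #inverse vol times composite position
weights <- abs(invVolPos)/sum(abs(invVolPos)) #normalize the absolute values
weights <- weights * (composite > 0)          #keep only positive composite positions
cashWeight <- 1 - sum(weights)                #remainder goes to the cash asset (SHY)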

This is how it looks in code.

This is the code for obtaining the data (from Yahoo finance) and separating it into cash and non-cash data.

require(quantmod)
require(caTools)
require(PerformanceAnalytics)
require(TTR)
getSymbols(c("LQD", "DBC", "VTI", "ICF", "SHY"), from="1990-01-01")

prices <- cbind(Ad(LQD), Ad(DBC), Ad(VTI), Ad(ICF), Ad(SHY))
prices <- prices[!is.na(prices[,2]),]
returns <- Return.calculate(prices)
cashPrices <- prices[, 5]
assetPrices <- prices[, -5]

This is the function for computing the percentile channel positions for a given parameter setting.
Unfortunately, it is not instantaneous, due to R's rollapply function paying a price in speed for generality.
While the caTools package has a runquantile function, as of the time of this writing, I have found differences
between its output and runMedian in TTR, so I'll have to get in touch with the package's author.

pctChannelPosition <- function(prices, rebal_on=c("months", "quarters"),
                               dayLookback = 60,
                               lowerPct = .25, upperPct = .75) {

  upperQ <- rollapply(prices, width=dayLookback, quantile, probs=upperPct)
  lowerQ <- rollapply(prices, width=dayLookback, quantile, probs=lowerPct)
  positions <- xts(matrix(nrow=nrow(prices), ncol=ncol(prices), NA),
                   order.by=index(prices))
  positions[prices > upperQ] <- 1
  positions[prices < lowerQ] <- -1

  ep <- endpoints(positions, on = rebal_on[1])
  positions <- positions[ep,]
  positions <- na.locf(positions)
  positions[is.na(positions)] <- 0

  colnames(positions) <- colnames(prices)
  return(positions)
}

The way this function works is simple: it computes a running quantile using rollapply, and then scores
anything with a price above its 75th percentile as 1, and anything below the 25th percentile as -1, in
accordance with David Varadi's post.

It then subsets these quantities on months (quarters is also possible, or for that matter other values, but the
spirit of the strategy seems to be months or quarters), and imputes any NAs with the last known observation,
or zero if it is an initial NA before any position is found. Something I have found over the course of writing
this and the QTS strategy is that one need not bother implementing a looping mechanism to allocate
positions monthly if there isn't a correlation matrix based on daily data involved every month, and it makes
the code more readable.
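
Regarding the runquantile/runMedian discrepancy mentioned above, a quick way to check it on any series is
something along these lines (my own rough sketch; the series and window length are arbitrary):

#rough sketch comparing caTools' runquantile at the 50th percentile against TTR's runMedian,
#looking only at the region where both have a full trailing window
require(caTools)
require(TTR)
set.seed(42)
x <- cumsum(rnorm(500))

viaRunquantile <- as.numeric(runquantile(x, k=60, probs=.5, align="right"))
viaRunMedian <- as.numeric(runMedian(x, n=60))

idx <- 60:length(x)
summary(abs(viaRunquantile[idx] - viaRunMedian[idx]))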

Next, we find our composite position.

#find our positions, add them up
d60 <- pctChannelPosition(assetPrices)
d120 <- pctChannelPosition(assetPrices, dayLookback = 120)
d180 <- pctChannelPosition(assetPrices, dayLookback = 180)
d252 <- pctChannelPosition(assetPrices, dayLookback = 252)
compositePosition <- (d60 + d120 + d180 + d252)/4

Next, find the running volatility for the assets, and subset it to the same time periods (in this case, monthly
endpoints) as our composite position. In David Varadi's example, the parameter is a 20-day lookback.

#find 20-day rolling standard deviations, subset them on identical indices
#to the percentile channel monthly positions
sd20 <- xts(sapply(returns[,-5], runSD, n=20), order.by=index(assetPrices))
monthlySDs <- sd20[index(compositePosition)]

Next, perform the following steps: find the inverse volatility of these quantities, multiply by the composite
position score, take the absolute value, and keep any position for which the composite position is greater
than zero (that is, has a positive sign). Some initial NA rows, caused by a lack of data (either not enough days
to compute a running volatility, or no positive positions yet), will simply be imputed to zero. Reinvest the
remainder in cash.

#compute inverse volatilities
inverseVols <- 1/monthlySDs

#multiply inverse volatilities by composite positions
invVolPos <- inverseVols*compositePosition

#take absolute values of inverse volatility multiplied by position
absInvVolPos <- abs(invVolPos)

#normalize the above quantities
normalizedAbsInvVols <- absInvVolPos/rowSums(absInvVolPos)

#keep only positions with positive composite positions (remove zeroes/negative)
nonCashPos <- normalizedAbsInvVols * sign(compositePosition > 0)
nonCashPos[is.na(nonCashPos)] <- 0 #no positions before we have enough data

#add cash position which is complement of non-cash position
finalPos <- nonCashPos
finalPos$cashPos <- 1-rowSums(finalPos)

And finally, the punchline, how does this strategy perform?

#compute returns
stratRets <- Return.portfolio(R = returns, weights = finalPos)
charts.PerformanceSummary(stratRets)
stats <- rbind(table.AnnualizedReturns(stratRets), maxDrawdown(stratRets))
rownames(stats)[4] <- "Worst Drawdown"
stats

Like this:

> stats
                          portfolio.returns
Annualized Return                0.10070000
Annualized Std Dev               0.06880000
Annualized Sharpe (Rf=0%)        1.46530000
Worst Drawdown                   0.07449537

With the following equity curve:


The statistics are visibly worse than David Varadi's: a 10% CAGR vs. 11.1%, a 6.9% annualized standard
deviation vs. 5.72%, a 7.45% max drawdown vs. 5.5%, and likewise for derived statistics (e.g. MAR). However, my data
starts far later, and 1995-1996 seemed to be phenomenal for this strategy. Here are the cumulative returns for
the data I have:

> apply.yearly(stratRets, Return.cumulative)
           portfolio.returns
2006-12-29        0.11155069
2007-12-31        0.07574266
2008-12-31        0.16921233
2009-12-31        0.14600008
2010-12-31        0.12996371
2011-12-30        0.06092018
2012-12-31        0.07306617
2013-12-31        0.06303612
2014-12-31        0.05967415
2015-02-13        0.01715446

I see a major discrepancy between my returns and David's returns in 2011, but beyond that, the pattern of
yearly returns seems fairly close. Whether my methodology is incorrect (I think I followed the procedure to
the best of my understanding, but of course, if someone sees a mistake in my code, please let me know), or
whether it's the result of using Yahoo's questionable-quality data, I am uncertain.

However, in my opinion, that doesn't take away from the validity of the strategy as a whole. With a Sharpe
ratio in the mid-1 range on a monthly rebalancing scale, and steady new equity highs, I feel that this is a
result worth sharing, even if not directly corroborated (yet, hopefully).

One last note: some of the readers on David Varadi's blog have cried foul due to their inability to come close
to his results. Since I've come close, I feel that the results are valid, and since I'm using different data, my
results are not identical. However, if anyone has questions about my process, feel free to leave questions
and/or comments.

Thanks for reading.

NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract
or full time roles available for proprietary research that could benefit from my skills, please contact me
through my LinkedIn here.

Comparing Flexible and Elastic Asset Allocation


Posted on January 30, 2015 Posted in Asset Allocation, David Varadi, Portfolio Management, R Tagged
R 6 Comments

So recently, I tried to combine Flexible and Elastic Asset Allocation. The operative word being: tried.
Essentially, I saw Flexible Asset Allocation as an incomplete algorithm, namely that although it was an
excellent method for selecting securities, there had to have been a better way to weight stocks than a
naive equal-weight scheme.

It turns out, the methods outlined in Elastic Asset Allocation weren't doing the trick (that is, a four-month
cumulative return raised to the return weight, multiplied by the correlation to a daily-rebalanced equal-weight
index of the selected securities with cumulative return greater than zero). Either I managed a marginally
higher return at the cost of much higher volatility and protracted drawdowns, or I maintained my Sharpe ratio
at the cost of much lower returns. Thus, I scrapped all of it, which was a shame, as I was hoping to be able to
combine the two methodologies into one system that would extend the research I previously blogged on.
Instead, after scrapping it, I decided to have a look as to why I was running into the issues I was.

In any case, here's the quick demo I did.

require(quantmod)
require(PerformanceAnalytics)
require(IKTrading)

symbols <- c("VTSMX", "FDIVX", "VEIEX", "VBMFX", "VFISX", "VGSIX", "QRAAX")

getSymbols(symbols, from="1990-01-01")
prices <- list()
for(i in 1:length(symbols)) {
  prices[[i]] <- Ad(get(symbols[i]))
}
prices <- do.call(cbind, prices)
colnames(prices) <- gsub("\\.[A-z]*", "", colnames(prices))
ep <- endpoints(prices, "months")
adPrices <- prices
prices <- prices[ep,]
prices <- prices["1997-03::"]
adPrices <- adPrices["1997-04::"]

eaaOffensive <- EAA(monthlyPrices = prices, returnWeights = TRUE, cashAsset = "VBMFX", bestN = 3)
eaaOffNoCrash <- EAA(monthlyPrices = prices, returnWeights = TRUE, cashAsset = "VBMFX",
                     bestN = 3, enableCrashProtection = FALSE)
faa <- FAA(prices = adPrices, riskFreeName = "VBMFX", bestN = 3,
           returnWeights = TRUE, stepCorRank = TRUE)
faaNoStepwise <- FAA(prices = adPrices, riskFreeName = "VBMFX", bestN = 3,
                     returnWeights = TRUE, stepCorRank = FALSE)

eaaOffDaily <- Return.portfolio(R = Return.calculate(adPrices), weights = eaaOffensive[[1]])
eaaOffNoCrash <- Return.portfolio(R = Return.calculate(adPrices), weights = eaaOffNoCrash[[1]])
charts.PerformanceSummary(cbind(faa[[2]], eaaOffDaily))

comparison <- cbind(eaaOffDaily, eaaOffNoCrash, faa[[2]], faaNoStepwise[[2]])
colnames(comparison) <- c("Offensive EAA", "Offensive EAA (no crash protection)",
                          "FAA (stepwise)", "FAA (no stepwise)")
charts.PerformanceSummary(comparison)

rbind(table.AnnualizedReturns(comparison), maxDrawdown(comparison))

Essentially, I compared FAA with the stepwise correlation rank algorithm, without it, and the offensive EAA
with and without crash protection. The results were disappointing.

Here are the equity curves:


In short, the best default FAA variant handily outperforms any of the EAA variants.

And here are the statistics:

                          Offensive EAA Offensive EAA (no crash protection) FAA (stepwise) FAA (no stepwise)
Annualized Return             0.1247000                           0.1305000      0.1380000          0.131400
Annualized Std Dev            0.1225000                           0.1446000      0.0967000          0.089500
Annualized Sharpe (Rf=0%)     1.0187000                           0.9021000      1.4271000          1.467900
Worst Drawdown                0.1581859                           0.2696754      0.1376124          0.130865

A note of warning: if you run EAA, it seems unwise to do so without crash protection (that is, decreasing your
stake in everything but the cash/risk-free asset in proportion to the number of negative-return securities). I
didn't include the defensive variant of EAA, since that gives markedly lower returns.
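
A minimal sketch of what that crash-protection scaling might look like (my own illustration of the concept,
not the internals of the EAA function; the helper name and numbers are hypothetical):

#scale down non-cash weights by the fraction of universe securities with non-positive returns,
#and sweep the remainder into the cash asset
crashProtect <- function(weights, universeReturns, cashName) {
  fracNegative <- mean(universeReturns <= 0)
  nonCash <- setdiff(names(weights), cashName)
  weights[nonCash] <- weights[nonCash] * (1 - fracNegative)
  weights[cashName] <- 1 - sum(weights[nonCash])
  weights
}

#toy usage: three of four non-cash securities had negative returns, so non-cash stakes shrink to a quarter
w <- c(VTSMX=.4, VGSIX=.4, VBMFX=.2)
crashProtect(w, universeReturns=c(VTSMX=.02, VGSIX=-.01, VEIEX=-.03, QRAAX=-.05), cashName="VBMFX")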

Not that this should discredit EAA, but on the whole, I feel that there should probably be a way to beat the
(usually) equal-weight scheme that FAA employs (sometimes the cash asset gets a larger weight because a
negative-momentum asset makes it into the top assets by virtue of its volatility and correlation ranks, and
then ends up getting zeroed out), and that treating FAA as an asset selection mechanism as opposed
to a weighting mechanism may yield some value. However, I have not yet found it myself.

Thanks for reading.


NOTE: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract
or full time roles available for proprietary research that could benefit from my skills, please contact me
through my LinkedIn here.

An Update on Flexible Asset Allocation


Posted on November 25, 2014 Posted in Asset Allocation, David Varadi, Portfolio Management, R
Tagged R 32 Comments

A few weeks back, after seeing my replication, one of the original authors of the Flexible Asset Allocation
paper got in touch with me to suggest a slight adjustment to the code: rather than remove any
negative-momentum securities before performing any ranking, perform all ranking without taking
absolute momentum into account, and only remove negative absolute momentum securities at the very
end, after allocating weights.

Here's the new code:

FAA <- function(prices, monthLookback = 4,
                weightMom = 1, weightVol = .5, weightCor = .5,
                riskFreeName = NULL, bestN = 3,
                stepCorRank = FALSE, stepStartMethod = c("best", "default"),
                geometric = TRUE, ...) {
  stepStartMethod <- stepStartMethod[1]
  if(is.null(riskFreeName)) {
    prices$zeroes <- 0
    riskFreeName <- "zeroes"
    warning("No risk-free security specified. Recommended to use one of:
            quandClean('CHRIS/CME_US'), SHY, or VFISX.
            Using vector of zeroes instead.")
  }
  returns <- Return.calculate(prices)
  monthlyEps <- endpoints(prices, on = "months")
  riskFreeCol <- grep(riskFreeName, colnames(prices))
  tmp <- list()
  dates <- list()

  for(i in 2:(length(monthlyEps) - monthLookback)) {

    #subset data
    priceData <- prices[monthlyEps[i]:monthlyEps[i+monthLookback],]
    returnsData <- returns[monthlyEps[i]:monthlyEps[i+monthLookback],]

    #perform computations
    momentum <- data.frame(t(t(priceData[nrow(priceData),])/t(priceData[1,]) - 1))
    momentum <- momentum[,!is.na(momentum)]
    #momentum[is.na(momentum)] <- -1 #set any NA momentum to negative 1 to keep R from crashing
    priceData <- priceData[,names(momentum)]
    returnsData <- returnsData[,names(momentum)]

    momRank <- rank(momentum)
    vols <- data.frame(StdDev(returnsData))
    volRank <- rank(-vols)
    cors <- cor(returnsData, use = "complete.obs")
    if (stepCorRank) {
      if(stepStartMethod=="best") {
        compositeMomVolRanks <- weightMom*momRank + weightVol*volRank
        maxRank <- compositeMomVolRanks[compositeMomVolRanks==max(compositeMomVolRanks)]
        corRank <- stepwiseCorRank(corMatrix=cors, startNames = names(maxRank),
                                   bestHighestRank = TRUE, ...)
      } else {
        corRank <- stepwiseCorRank(corMatrix=cors, bestHighestRank=TRUE, ...)
      }
    } else {
      corRank <- rank(-rowSums(cors))
    }

    totalRank <- rank(weightMom*momRank + weightVol*volRank + weightCor*corRank)

    upper <- length(names(returnsData))
    lower <- max(upper-bestN+1, 1)
    topNvals <- sort(totalRank, partial=seq(from=upper, to=lower))[c(upper:lower)]

    #compute weights
    longs <- totalRank %in% topNvals #invest in ranks length - bestN or higher (in R, rank 1 is lowest)
    longs[momentum < 0] <- 0 #in previous algorithm, removed momentums < 0; this time, we zero them out at the end
    longs <- longs/sum(longs) #equal weight all candidates
    longs[longs > 1/bestN] <- 1/bestN #in the event that we have fewer than top N invested into, lower weights to 1/top N
    names(longs) <- names(totalRank)

    #append removed names (those with momentum < 0)
    removedZeroes <- rep(0, ncol(returns)-length(longs))
    names(removedZeroes) <- names(returns)[!names(returns) %in% names(longs)]
    longs <- c(longs, removedZeroes)

    #reorder to be in the same column order as original returns/prices
    longs <- data.frame(t(longs))
    longs <- longs[, names(returns)]

    #append lists
    tmp[[i]] <- longs
    dates[[i]] <- index(returnsData)[nrow(returnsData)]
  }

  weights <- do.call(rbind, tmp)
  dates <- do.call(c, dates)
  weights <- xts(weights, order.by=as.Date(dates))
  weights[, riskFreeCol] <- weights[, riskFreeCol] + 1-rowSums(weights)
  strategyReturns <- Return.rebalancing(R = returns, weights = weights, geometric = geometric)
  colnames(strategyReturns) <- paste(monthLookback, weightMom, weightVol, weightCor, sep="_")
  return(strategyReturns)
}

And here are the new results, both with the original configuration, and using the stepwise correlation ranking
algorithm introduced by David Varadi:

mutualFunds <- c("VTSMX", #Vanguard Total Stock Market Index
                 "FDIVX", #Fidelity Diversified International Fund
                 "VEIEX", #Vanguard Emerging Markets Stock Index Fund
                 "VFISX", #Vanguard Short-Term Treasury Fund
                 "VBMFX", #Vanguard Total Bond Market Index Fund
                 "QRAAX", #Oppenheimer Commodity Strategy Total Return
                 "VGSIX" #Vanguard REIT Index Fund
)

#mid 1997 to end of October 2014
getSymbols(mutualFunds, from="1997-06-30", to="2014-10-30")
tmp <- list()
for(fund in mutualFunds) {
  tmp[[fund]] <- Ad(get(fund))
}

#always use a list when intending to cbind/rbind large quantities of objects
adPrices <- do.call(cbind, args = tmp)
colnames(adPrices) <- gsub(".Adjusted", "", colnames(adPrices))

original <- FAA(adPrices, riskFreeName="VFISX")
swc <- FAA(adPrices, riskFreeName="VFISX", stepCorRank = TRUE)
originalOld <- FAAreturns(adPrices, riskFreeName="VFISX")
swcOld <- FAAreturns(adPrices, riskFreeName="VFISX", stepCorRank=TRUE)
all4 <- cbind(original, swc, originalOld, swcOld)
names(all4) <- c("original", "swc", "origOld", "swcOld")
charts.PerformanceSummary(all4)

> rbind(Return.annualized(all4)*100,
+       maxDrawdown(all4)*100,
+       SharpeRatio.annualized(all4))
                                 original       swc   origOld    swcOld
Annualized Return               12.795205 14.135997 13.221775 14.037137
Worst Drawdown                  11.361801 11.361801 13.082294 13.082294
Annualized Sharpe Ratio (Rf=0%)  1.455302  1.472924  1.377914  1.390025

And the resulting equity curve comparison:


Overall, it seems that filtering on absolute momentum after applying all weightings (using only relative
momentum to rank) actually improves downside risk profiles ever so slightly compared to removing negative
momentum securities ahead of time. In any case, FAAreturns will be the function that removes negative
momentum securities ahead of time, and FAA will be the one that removes them after all else is said and
done.

I'll return to the standard volatility trading agenda soon.

Thanks for reading.

Note: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract
or full time roles available for proprietary research that could benefit from my skills, please contact me
through my LinkedIn here.

Predicting High Yield with SPY - a Two Part Post


Posted on November 7, 2014 Posted in David Varadi, ETFs, QuantStrat, R, Replication, Trading Tagged
R 9 Comments

This post will cover ideas from two individuals: David Varadi of CSS Analytics, with whom I am currently
collaborating on some volatility trading strategies (the extent of which I hope will end up as a workable
trading strategy; my current replica of some of VolatilityMadeSimple's publicly displayed example
strategies (note, from other blogs, not to be confused with their proprietary strategy) is something that I
think is too risky to be traded as-is), and Cesar Alvarez, of Alvarez Quant Trading. If his name sounds
familiar to some of you, that's because it should. He used to collaborate (still does?) with Larry Connors of
TradingMarkets.com, and I'm pretty sure that sometime in the future, I'll cover those strategies as well.

The strategy for this post is simple, and taken from this post from CSS Analytics.

Pretty straightforward: compute a 20-day SMA on the SPY (I use unadjusted prices, since that's what the data
would have actually been). When the SPY's close crosses above the 20-day SMA, buy the high-yield bond
index, either VWEHX or HYG, and when the converse happens, move to the cash-substitute security, either
VFISX or SHY.

Now, while the above paragraph may make it seem that VWEHX and HYG are perfect substitutes, well,
they aren't, as no two instruments are exactly alike, which, as could be noted from my last post, is a detail
that one should be mindful of. Even creating a synthetic equivalent is never exactly perfect. Even though I
try my best to iron out such issues, over the course of generally illustrating an idea, the numbers won't line
up exactly (though hopefully, they'll be close). In any case, it's best to leave an asterisk whenever one is
forced to use synthetics for the sake of a prolonged backtest.

The other elephant/gorilla in the room (depending on your preference for metaphorical animals) is whether
or not to use adjusted data. The upside is that dividends are taken into account. The downside is that the
data isn't the real data, and it also assumes a continuous reinvestment of dividends. Unfortunately, shares
of a security are not continuous quantities; they are discrete quantities made so by their unit price, so the
implicit assumptions in adjusted prices can be optimistic.

For this particular topic, Cesar Alvarez covered it exceptionally well on his blog post, and I highly
recommend readers give that post a read, in addition to following his blog in general. However, just to
illustrate the effect, let's jump into the script.

getSymbols("VWEHX", from="1950-01-01")
getSymbols("SPY", from="1900-01-01")
getSymbols("HYG", from="1990-01-01")
getSymbols("SHY", from="1990-01-01")
getSymbols("VFISX", from="1990-01-01")

spySma20Cl <- SMA(Cl(SPY), n=20)
clSig <- Cl(SPY) > spySma20Cl
clSig <- lag(clSig, 1)

vwehxCloseRets <- Return.calculate(Cl(VWEHX))
vfisxCloseRets <- Return.calculate(Cl(VFISX))
vwehxAdjustRets <- Return.calculate(Ad(VWEHX))
vfisxAdjustRets <- Return.calculate(Ad(VFISX))

hygCloseRets <- Return.calculate(Cl(HYG))
shyCloseRets <- Return.calculate(Cl(SHY))
hygAdjustRets <- Return.calculate(Ad(HYG))
shyAdjustRets <- Return.calculate(Ad(SHY))

mutualAdRets <- vwehxAdjustRets*clSig + vfisxAdjustRets*(1-clSig)
mutualClRets <- vwehxCloseRets*clSig + vfisxCloseRets*(1-clSig)

etfAdRets <- hygAdjustRets*clSig + shyAdjustRets*(1-clSig)
etfClRets <- hygCloseRets*clSig + shyCloseRets*(1-clSig)

Here are the results:

mutualFundBacktest <- merge(mutualAdRets, mutualClRets, join='inner')
charts.PerformanceSummary(mutualFundBacktest)
data.frame(t(rbind(Return.annualized(mutualFundBacktest)*100,
                   maxDrawdown(mutualFundBacktest)*100,
                   SharpeRatio.annualized(mutualFundBacktest))))

Which produces the following equity curves:

As can be seen, the choice to adjust or not can be pretty enormous. Here are the corresponding three
statistics:

               Annualized.Return Worst.Drawdown Annualized.Sharpe.Ratio..Rf.0..
VWEHX.Adjusted         14.675379       2.954519                        3.979383
VWEHX.Close             7.794086       4.637520                        3.034225

Even without the adjustment, the strategy itself is very, very good, at least from this angle. Let's look at the
ETF variant now.

etfBacktest <- merge(etfAdRets, etfClRets, join='inner')
charts.PerformanceSummary(etfBacktest)
data.frame(t(rbind(Return.annualized(etfBacktest)*100,
                   maxDrawdown(etfBacktest)*100,
                   SharpeRatio.annualized(etfBacktest))))

The resultant equity curve:


With the corresponding statistics:

             Annualized.Return Worst.Drawdown Annualized.Sharpe.Ratio..Rf.0..
HYG.Adjusted         11.546005       6.344801                       1.4674301
HYG.Close             5.530951       9.454754                       0.6840059

Again, another stark difference. Let's combine all four variants.

fundsAndETFs <- merge(mutualFundBacktest, etfBacktest, join='inner')
charts.PerformanceSummary(fundsAndETFs)
data.frame(t(rbind(Return.annualized(fundsAndETFs)*100,
                   maxDrawdown(fundsAndETFs)*100,
                   SharpeRatio.annualized(fundsAndETFs))))

The equity curve:


With the resulting statistics:

               Annualized.Return Worst.Drawdown Annualized.Sharpe.Ratio..Rf.0..
VWEHX.Adjusted         17.424070       2.787889                       4.7521579
VWEHX.Close            11.739849       3.169040                       3.8715923
HYG.Adjusted           11.546005       6.344801                       1.4674301
HYG.Close               5.530951       9.454754                       0.6840059

In short, while the strategy itself seems strong, the particular similar (but not identical) instruments used to
implement the strategy make a large difference. So, when backtesting, make sure to understand what taking
liberties with the data means. In this case, by turning two levers, the Sharpe Ratio varied from less than 1 to
above 4.

Next, I'd like to demonstrate a little trick in quantstrat. Although plenty of examples of trading strategies
only derive indicators (along with signals and rules) from the market data itself, there are also many
strategies that incorporate data beyond simply the price action of the particular security at hand. Such
examples would be many SPY strategies that incorporate VIX information, or off-instrument signal
strategies like this one.

The way to incorporate off-instrument information into quantstrat simply requires understanding what the
mktdata object is, which is nothing more than an xts type object. By default, a security may originally have
just the OHLCV and open interest columns. Most demos in the public space generally use data only from the
instruments themselves. However, it is very much possible to actually pre-compute signals.

Here's a continuation of the script to demonstrate, with a demo of the unadjusted HYG leg of this trade:

####### BOILERPLATE FROM HERE
require(quantstrat)

currency('USD')
Sys.setenv(TZ="UTC")
symbols <- "HYG"
stock(symbols, currency="USD", multiplier=1)
initDate="1990-01-01"

strategy.st <- portfolio.st <- account.st <- "preCalc"
rm.strat(portfolio.st)
rm.strat(strategy.st)
initPortf(portfolio.st, symbols=symbols, initDate=initDate, currency='USD')
initAcct(account.st, portfolios=portfolio.st, initDate=initDate, currency='USD')
initOrders(portfolio.st, initDate=initDate)
strategy(strategy.st, store=TRUE)
######### TO HERE

clSig <- Cl(SPY) > SMA(Cl(SPY), n=20)

HYG <- merge(HYG, clSig, join='inner')
names(HYG)[7] <- "precomputed_signal"

#no parameters or indicators--we precalculated our signal

add.signal(strategy.st, name="sigThreshold",
           arguments=list(column="precomputed_signal", threshold=.5,
                          relationship="gt", cross=TRUE),
           label="longEntry")

add.signal(strategy.st, name="sigThreshold",
           arguments=list(column="precomputed_signal", threshold=.5,
                          relationship="lt", cross=TRUE),
           label="longExit")

add.rule(strategy.st, name="ruleSignal",
         arguments=list(sigcol="longEntry", sigval=TRUE, orderqty=1,
                        ordertype="market",
                        orderside="long", replace=FALSE, prefer="Open"),
         type="enter", path.dep=TRUE)

add.rule(strategy.st, name="ruleSignal",
         arguments=list(sigcol="longExit", sigval=TRUE, orderqty="all",
                        ordertype="market",
                        orderside="long", replace=FALSE, prefer="Open"),
         type="exit", path.dep=TRUE)

#apply strategy
t1 <- Sys.time()
out <- applyStrategy(strategy=strategy.st, portfolios=portfolio.st)
t2 <- Sys.time()
print(t2-t1)

#set up analytics
updatePortf(portfolio.st)
dateRange <- time(getPortfolio(portfolio.st)$summary)[-1]
updateAcct(portfolio.st, dateRange)
updateEndEq(account.st)

As you can see, no indicators are computed from the actual market data, because the strategy works off of a
pre-computed value. The lowest-hanging fruit of applying this methodology, of course, would be to append
the VIX index as an indicator for trading strategies on the SPY.
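
For instance, a rough sketch of that idea might look like the following (the column name and threshold here
are hypothetical, purely for illustration):

#append the VIX close as a pre-computed column on SPY, so a sigThreshold signal
#can work off of it just like the precomputed_signal above
getSymbols(c("SPY", "^VIX"), from="2005-01-01")
SPY <- merge(SPY, Cl(VIX), join='inner')
names(SPY)[ncol(SPY)] <- "vix_close"

#then, inside a strategy, something along these lines:
#add.signal(strategy.st, name="sigThreshold",
#           arguments=list(column="vix_close", threshold=20,
#                          relationship="gt", cross=TRUE),
#           label="highVol")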

And here are the results, trading a unit quantity:

> data.frame(round(t(tradeStats(portfolio.st)[-c(1,2)]),2))
                      HYG
Num.Txns           217.00
Num.Trades         106.00
Net.Trading.PL      36.76
Avg.Trade.PL         0.35
Med.Trade.PL         0.01
Largest.Winner       9.83
Largest.Loser       -2.71
Gross.Profits       67.07
Gross.Losses       -29.87
Std.Dev.Trade.PL     1.67
Percent.Positive    50.00
Percent.Negative    50.00
Profit.Factor        2.25
Avg.Win.Trade        1.27
Med.Win.Trade        0.65
Avg.Losing.Trade    -0.56
Med.Losing.Trade    -0.39
Avg.Daily.PL         0.35
Med.Daily.PL         0.01
Std.Dev.Daily.PL     1.67
Ann.Sharpe           3.33
Max.Drawdown        -7.24
Profit.To.Max.Draw   5.08
Avg.WinLoss.Ratio    2.25
Med.WinLoss.Ratio    1.67
Max.Equity          43.78
Min.Equity          -1.88
End.Equity          36.76

And the corresponding position chart:

Lastly, here are the Vanguard links for VWEHX and VFISX. Apparently, neither charges a redemption fee.
I'm not sure if this means that they can be freely traded in a systematic fashion, however.

In conclusion, hopefully this post showed a potentially viable strategy, the importance of understanding the
nature of the data you're working with, and how to pre-compute values in quantstrat.

Thanks for reading.

Note: I am a freelance consultant in quantitative analysis on topics related to this blog. If you have contract
or full time roles available for proprietary research that could benefit from my skills, please contact me
through my LinkedIn here.

Combining FAA and Stepwise Correlation


Posted on October 31, 2014 Posted in Asset Allocation, David Varadi, Portfolio Management, R Tagged
R 18 Comments

Since I debuted the stepwise correlation algorithm, I suppose the punchline that people want to see is: does it
actually work?

The short answer? Yes, it does.

A slightly longer answer: it works, with the caveat that having a better correlation algorithm that makes up
25% of the total sum of weighted ranks only has a marginal (but nevertheless positive) effect on returns.
Furthermore, when comparing a max decorrelation weighting using default single-pass correlation vs.
stepwise, the stepwise gives a bumpier ride, but one with visibly larger returns. Furthermore, for this
universe, the difference between starting at the security ranked highest by the momentum and volatility
components, or with the security that has the smallest aggregate correlation to all securities, is very small.
Essentially, from my inspection, the answer to using stepwise correlation is: it's a perfectly viable
alternative, if not better.

Here are the functions used in the script:

require(quantmod)
require(PerformanceAnalytics)

stepwiseCorRank <- function(corMatrix, startNames=NULL, stepSize=1,
                            bestHighestRank=FALSE) {
  #edge cases
  if(dim(corMatrix)[1] == 1) {
    return(corMatrix)
  } else if (dim(corMatrix)[1] == 2) {
    ranks <- c(1.5, 1.5)
    names(ranks) <- colnames(corMatrix)
    return(ranks)
  }

  if(is.null(startNames)) {
    corSums <- rowSums(corMatrix)
    corRanks <- rank(corSums)
    startNames <- names(corRanks)[corRanks <= stepSize]
  }
  nameList <- list()
  nameList[[1]] <- startNames
  rankList <- list()
  rankCount <- 1
  rankList[[1]] <- rep(rankCount, length(startNames))
  rankedNames <- do.call(c, nameList)

  while(length(rankedNames) < nrow(corMatrix)) {
    rankCount <- rankCount+1
    subsetCor <- corMatrix[, rankedNames]
    if(class(subsetCor) != "numeric") {
      subsetCor <- subsetCor[!rownames(corMatrix) %in% rankedNames,]
      if(class(subsetCor) != "numeric") {
        corSums <- rowSums(subsetCor)
        corSumRank <- rank(corSums)
        lowestCorNames <- names(corSumRank)[corSumRank <= stepSize]
        nameList[[rankCount]] <- lowestCorNames
        rankList[[rankCount]] <- rep(rankCount, min(stepSize, length(lowestCorNames)))
      } else { #1 name remaining
        nameList[[rankCount]] <- rownames(corMatrix)[!rownames(corMatrix) %in% names(subsetCor)]
        rankList[[rankCount]] <- rankCount
      }
    } else { #first iteration, subset on first name
      subsetCorRank <- rank(subsetCor)
      lowestCorNames <- names(subsetCorRank)[subsetCorRank <= stepSize]
      nameList[[rankCount]] <- lowestCorNames
      rankList[[rankCount]] <- rep(rankCount, min(stepSize, length(lowestCorNames)))
    }
    rankedNames <- do.call(c, nameList)
  }

  ranks <- do.call(c, rankList)
  names(ranks) <- rankedNames
  if(bestHighestRank) {
    ranks <- 1+length(ranks)-ranks
  }
  ranks <- ranks[colnames(corMatrix)] #return to original order
  return(ranks)
}

FAAreturns <- function(prices, monthLookback = 4,
                       weightMom=1, weightVol=.5, weightCor=.5,
                       riskFreeName="VFISX", bestN=3,
                       stepCorRank = FALSE, stepStartMethod=c("best", "default")) {
  stepStartMethod <- stepStartMethod[1]
  returns <- Return.calculate(prices)
  monthlyEps <- endpoints(prices, on = "months")
  riskFreeCol <- grep(riskFreeName, colnames(prices))
  tmp <- list()
  dates <- list()

  for(i in 2:(length(monthlyEps) - monthLookback)) {

    #subset data
    priceData <- prices[monthlyEps[i]:monthlyEps[i+monthLookback],]
    returnsData <- returns[monthlyEps[i]:monthlyEps[i+monthLookback],]

    #perform computations
    momentum <- data.frame(t(t(priceData[nrow(priceData),])/t(priceData[1,]) - 1))
    priceData <- priceData[, momentum > 0] #remove securities with momentum < 0
    returnsData <- returnsData[, momentum > 0]
    momentum <- momentum[momentum > 0]
    names(momentum) <- colnames(returnsData)
    vol <- as.numeric(-sd.annualized(returnsData))

    if(length(momentum) > 1) {

      #perform ranking
      if(!stepCorRank) {
        sumCors <- -colSums(cor(returnsData, use="complete.obs"))
        stats <- data.frame(cbind(momentum, vol, sumCors))
        ranks <- data.frame(apply(stats, 2, rank))
        weightRankSum <- weightMom*ranks$momentum + weightVol*ranks$vol + weightCor*ranks$sumCors
        names(weightRankSum) <- rownames(ranks)
      } else {
        corMatrix <- cor(returnsData, use="complete.obs")
        momRank <- rank(momentum)
        volRank <- rank(vol)
        compositeMomVolRanks <- weightMom*momRank + weightVol*volRank
        maxRank <- compositeMomVolRanks[compositeMomVolRanks==max(compositeMomVolRanks)]
        if(stepStartMethod=="default") {
          stepCorRanks <- stepwiseCorRank(corMatrix=corMatrix, startNames = NULL,
                                          stepSize = 1, bestHighestRank = TRUE)
        } else {
          stepCorRanks <- stepwiseCorRank(corMatrix=corMatrix, startNames = names(maxRank),
                                          stepSize = 1, bestHighestRank = TRUE)
        }
        weightRankSum <- weightMom*momRank + weightVol*volRank + weightCor*stepCorRanks
      }

      totalRank <- rank(weightRankSum)

      #find top N values, from http://stackoverflow.com/questions/2453326/fastest-way-to-find-second-third-highest-lowest-value-in-vector-or-column
      #thanks to Dr. Rob J. Hyndman
      upper <- length(names(returnsData))
      lower <- max(upper-bestN+1, 1)
      topNvals <- sort(totalRank, partial=seq(from=upper, to=lower))[c(upper:lower)]

      #compute weights
      longs <- totalRank %in% topNvals #invest in ranks length - bestN or higher (in R, rank 1 is lowest)
      longs <- longs/sum(longs) #equal weight all candidates
      longs[longs > 1/bestN] <- 1/bestN #in the event that we have fewer than top N invested into, lower weights to 1/top N
      names(longs) <- names(totalRank)

    } else if(length(momentum) == 1) { #only one security had positive momentum
      longs <- 1/bestN
      names(longs) <- names(momentum)
    } else { #no securities had positive momentum
      longs <- 1
      names(longs) <- riskFreeName
    }

    #append removed names (those with momentum < 0)
    removedZeroes <- rep(0, ncol(returns)-length(longs))
    names(removedZeroes) <- names(returns)[!names(returns) %in% names(longs)]
    longs <- c(longs, removedZeroes)

    #reorder to be in the same column order as original returns/prices
    longs <- data.frame(t(longs))
    longs <- longs[, names(returns)]

    #append lists
    tmp[[i]] <- longs
    dates[[i]] <- index(returnsData)[nrow(returnsData)]
  }

  weights <- do.call(rbind, tmp)
  dates <- do.call(c, dates)
  weights <- xts(weights, order.by=as.Date(dates))
  weights[, riskFreeCol] <- weights[, riskFreeCol] + 1-rowSums(weights)
  strategyReturns <- Return.rebalancing(R = returns, weights = weights, geometric = FALSE)
  colnames(strategyReturns) <- paste(monthLookback, weightMom, weightVol, weightCor, sep="_")
  return(strategyReturns)
}

The FAAreturns function has been modified to transplant the stepwise correlation algorithm I discussed
earlier. Essentially, the chunk of code that performs the ranking inside the function got a little bit larger, and
some new arguments to the function have been introduced.

First off, there's the option to use the stepwise correlation algorithm in the first place, namely the
stepCorRank argument, defaulting to FALSE (the default settings replicate the original FAA idea demonstrated in the
first post on this idea). The argument that comes next, stepStartMethod, does the following:

Using the "default" setting, the algorithm will start off using the security that is simply least correlated
among the securities (that is, the one with the lowest sum of correlations to the others). The "best" setting,
however, will use the weighted sum of ranks using the prior two factors (momentum and volatility). This
argument defaults to using the best security (aka the one ranked best by the previous two factors), as
opposed to the default. At the end of the day, I suppose the best way of illustrating functionality is with some
examples of taking this piece of engineering out for a spin. So here goes!

1 mutualFunds <- c("VTSMX", #Vanguard Total Stock Market Index


"FDIVX", #Fidelity Diversified International Fund
2
"VEIEX", #Vanguard Emerging Markets Stock Index Fund
3 "VFISX", #Vanguard Short-Term Treasury Fund
4 "VBMFX", #Vanguard Total Bond Market Index Fund
5 "QRAAX", #Oppenheimer Commodity Strategy Total Return
6 "VGSIX" #Vanguard REIT Index Fund
)
7
8 #mid 1997 to end of 2012
9 getSymbols(mutualFunds, from="1997-06-30", to="2012-12-31")
1 tmp <- list()
0 for(fund in mutualFunds) {
11 tmp[[fund]] <- Ad(get(fund))
}
1
2 #always use a list hwne intending to cbind/rbind large quantities of objects
1 adPrices <- do.call(cbind, args = tmp)
3 colnames(adPrices) <- gsub(".Adjusted", "", colnames(adPrices))
1
4 original <- FAAreturns(adPrices, stepCorRank=FALSE)
1 originalSWCbest <- FAAreturns(adPrices, stepCorRank=TRUE)
originalSWCdefault <- FAAreturns(adPrices, stepCorRank=TRUE,
5 stepStartMethod="default")
1 stepMaxDecorBest <- FAAreturns(adPrices, weightMom=.05, weightVol=.025,
6 weightCor=1, riskFreeName="VFISX",
1 stepCorRank = TRUE, stepStartMethod="best")
stepMaxDecorDefault <- FAAreturns(adPrices, weightMom=.05, weightVol=.025,
7 weightCor=1, riskFreeName="VFISX",
1 stepCorRank = TRUE, stepStartMethod="default")
8 w311 <- FAAreturns(adPrices, weightMom=3, weightVol=1, weightCor=1, stepCorRank=TRUE)
1 originalMaxDecor <- FAAreturns(adPrices, weightMom=0, weightVol=1, stepCorRank=FALSE)
9 tmp <- cbind(original, originalSWCbest, originalSWCdefault,
stepMaxDecorBest, stepMaxDecorDefault, w311, originalMaxDecor)
2 names(tmp) <- c("original", "originalSWCbest", "originalSWCdefault", "SMDB",
0 "SMDD", "w311", "originalMaxDecor")
2
1
2
2
2
3
2
4
2
5
2
6
2
7
2
8
2
9 charts.PerformanceSummary(tmp, colorset=c("black", "orange", "blue", "purple",
"green", "red", "darkred"))
3
0
3 statsTable <- data.frame(t(rbind(Return.annualized(tmp)*100, maxDrawdown(tmp)*100,
1 SharpeRatio.annualized(tmp))))
3 statsTable$ReturnDrawdownRatio <- statsTable[,1]/statsTable[,2]
2
3
3
3
4
3
5
3
6
3
7
3
8
3
9
4
0

Same seven securities as the original paper, with the following return streams:

Original: the FAA original replication
originalSWCbest: original weights, stepwise correlation algorithm, using the best security as ranked by
momentum and volatility as a starting point.
originalSWCdefault: original weights, stepwise correlation algorithm, using the default (minimum sum of
correlations) security as a starting point.
stepMaxDecorBest: a max decorrelation algorithm that sets the momentum and volatility weights at .05
and .025 respectively, compared to 1 for correlation, simply to get the best starting security through the first
two factors.
stepMaxDecorDefault: analogous to originalSWCdefault, except with the starting security being defined as
the one with minimum sum of correlations.
w311: using a weighting of 3, 1, and 1 on momentum, vol, and correlation, respectively, while using the
stepwise correlation rank algorithm, starting with the best security (the default for the function), since I
suspected that not weighting momentum at 1 or higher was the reason no other equity curve could top
out above the paper's original.
originalMaxDecor: max decorrelation using the original 1-pass correlation matrix

Here is the performance chart:

Here's the way I interpret it:

Does David Varadi's stepwise correlation ranking algorithm help performance? From this standpoint, the
answer leads to yes. Using the original paper's parameters, the performance over the paper's backtest period
is marginally better in terms of the equity curves. Comparing max decorrelation algorithms (SMDB and
SMDD stand for stepwise max decorrelation best and default, respectively), the difference is even more
clear.

However, I was wondering why I could never actually outdo the original paper's annualized return, and out
of interest, decided to weigh the momentum ranking more heavily than the original paper eventually had it
set at. The result is a bumpier equity curve, but one that has a higher annualized return than any of the
others. It's also something that I didn't try in my walk-forward example (though interested parties can
simply modify the original momentum vector to contain a 1.5 weight, for instance).

Here's the table of statistics for the different permutations:


> statsTable
                   Annualized.Return Worst.Drawdown Annualized.Sharpe.Ratio..Rf.0.. ReturnDrawdownRatio
original                    14.43802       13.15625                        1.489724            1.097427
originalSWCbest             14.70544       13.15625                        1.421045            1.117753
originalSWCdefault          14.68145       13.37059                        1.457418            1.098041
SMDB                        13.55656       12.33452                        1.410072            1.099075
SMDD                        13.18864       11.94587                        1.409608            1.104033
w311                        15.76213       13.85615                        1.398503            1.137555
originalMaxDecor            11.89159       11.68549                        1.434220            1.017637

At the end of the day, all of the permutations exhibit solid results, and fall along different ends of the
risk/return curve. The original settings exhibit the highest Sharpe Ratio (barely), but not the highest
annualized return to max drawdown ratio (which surprisingly, belongs to the setting that overweights
momentum).

To wrap this analysis up (since there are other strategies that I wish to replicate), here is the out-of-sample
performance of these seven strategies (to Oct 30, 2014):

Maybe not the greatest thing in the world considering the S&P has made some spectacular returns in 2013,
but nevertheless, the momentum variant strategies established new equity highs fairly recently, and look to
be on their way up from their latest slight drawdown. Here are the statistics for 2013-2014:

statsTable <- data.frame(t(rbind(Return.annualized(tmp["2013::"])*100,
                                 maxDrawdown(tmp["2013::"])*100,
                                 SharpeRatio.annualized(tmp["2013::"]))))
statsTable$ReturnDrawdownRatio <- statsTable[,1]/statsTable[,2]

> statsTable
                   Annualized.Return Worst.Drawdown Annualized.Sharpe.Ratio..Rf.0.. ReturnDrawdownRatio
original                    9.284678       8.259298                       1.1966581           1.1241485
originalSWCbest             8.308246       9.657667                       0.9627413           0.8602746
originalSWCdefault          8.916144       8.985685                       1.0861781           0.9922609
SMDB                        6.406438       9.657667                       0.8366559           0.6633525
SMDD                        5.641980       5.979313                       0.7840507           0.9435833
w311                        8.921268       9.025100                       1.0142871           0.9884953
originalMaxDecor            2.888778       6.670709                       0.4244202           0.4330542

So, the original parameters are working solidly, the stepwise correlation algorithm seems to be in a slight rut,
and the variants without any emphasis on momentum simply aren't that great (they were created purely as
illustrative tools to begin with). Whether you prefer to run FAA with these securities, or with trading
strategies of your own, my only caveat is that transaction costs haven't been taken into consideration (from
what I hear, Interactive Brokers charges $1 per transaction, so it shouldn't make a world of a difference),
but beyond that, I believe these last four posts have shown that FAA is something that works. While it
doesn't always work perfectly (EG the S&P 500 had a very good 2013), the logic is sound, and the results
are solid, even given some rather plain-vanilla type securities.

In any case, I think I'll conclude with the fact that FAA works, and the stepwise correlation algorithm
provides a viable alternative for computing your weights. I'll update my IKTrading package with some
formal documentation regarding this algorithm soon.

Thanks for reading.

Introducing Stepwise Correlation Rank


Posted on October 27, 2014 Posted in Asset Allocation, Data Analysis, David Varadi, Portfolio
Management, R Tagged R 7 Comments

So in the last post, I attempted to replicate the Flexible Asset Allocation paper. I'd like to offer thanks to
Pat of Intelligent Trading Tech (not updated recently, hopefully this will change) for helping me corroborate
the results, so that I have more confidence there isn't an error in my code.

One of the procedures the authors of the FAA paper used is a correlation rank, which I interpreted as the
average correlation of each security to the others.

The issue, pointed out to me in a phone conversation I had with David Varadi, is that when considering
correlation, shouldn't the correlations the investor is concerned about be between instruments within the
portfolio, as opposed to simply all the correlations, including those to instruments not in the portfolio? To that
end, when selecting assets (or possibly features in general), conceptually, it makes more sense to select in a
stepwise fashion; that is, start off with a subset of the correlation matrix, and then rank assets in order of their
correlation to the heretofore selected assets, as opposed to all of them. This was explained in Mr. Varadi's
recent post.

Here's a work-in-progress function I wrote to formally code this idea:

stepwiseCorRank <- function(corMatrix, startNames=NULL, stepSize=1,
                            bestHighestRank=FALSE) {
  #edge cases
  if(dim(corMatrix)[1] == 1) {
    return(corMatrix)
  } else if (dim(corMatrix)[1] == 2) {
    ranks <- c(1.5, 1.5)
    names(ranks) <- colnames(corMatrix)
    return(ranks)
  }

  if(is.null(startNames)) {
    corSums <- rowSums(corMatrix)
    corRanks <- rank(corSums)
    startNames <- names(corRanks)[corRanks <= stepSize]
  }
  nameList <- list()
  nameList[[1]] <- startNames
  rankList <- list()
  rankCount <- 1
  rankList[[1]] <- rep(rankCount, length(startNames))
  rankedNames <- do.call(c, nameList)

  while(length(rankedNames) < nrow(corMatrix)) {
    rankCount <- rankCount+1
    subsetCor <- corMatrix[, rankedNames]
    if(class(subsetCor) != "numeric") {
      subsetCor <- subsetCor[!rownames(corMatrix) %in% rankedNames,]
      if(class(subsetCor) != "numeric") {
        corSums <- rowSums(subsetCor)
        corSumRank <- rank(corSums)
        lowestCorNames <- names(corSumRank)[corSumRank <= stepSize]
        nameList[[rankCount]] <- lowestCorNames
        rankList[[rankCount]] <- rep(rankCount, min(stepSize, length(lowestCorNames)))
      } else { #1 name remaining
        nameList[[rankCount]] <- rownames(corMatrix)[!rownames(corMatrix) %in% names(subsetCor)]
        rankList[[rankCount]] <- rankCount
      }
    } else { #first iteration, subset on first name
      subsetCorRank <- rank(subsetCor)
      lowestCorNames <- names(subsetCorRank)[subsetCorRank <= stepSize]
      nameList[[rankCount]] <- lowestCorNames
      rankList[[rankCount]] <- rep(rankCount, min(stepSize, length(lowestCorNames)))
    }
    rankedNames <- do.call(c, nameList)
  }

  ranks <- do.call(c, rankList)
  names(ranks) <- rankedNames
  if(bestHighestRank) {
    ranks <- 1+length(ranks)-ranks
  }
  ranks <- ranks[colnames(corMatrix)] #return to original order
  return(ranks)
}

So the way the function works is that it takes in a correlation matrix, a starting name (if provided), and a step
size (that is, how many assets to select per step, so that the process doesn't become extremely long when
dealing with larger amounts of assets/features). Then, it iterates: subset the correlation matrix on the starting
name, find the minimum value, and add it to a list of already-selected names. Next, subset the
correlation matrix columns on the selected names and the rows on the not-selected names, and repeat until
all names have been accounted for. Due to R's little habit of wiping out labels when a matrix becomes a
vector, I had to write some special-case code, which is the reason for the two nested if/else statements (the first
one being for the first column subset, and the second being for when there's only one row remaining).
Also, if there's an edge case (1 or 2 securities), there is some functionality to handle those trivial cases.
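
For a quick sense of the mechanics before the full test script, here's a toy call on a hypothetical three-asset
correlation matrix (the numbers are made up):

#hypothetical 3x3 correlation matrix to show what the function returns
toyCor <- matrix(c( 1, .8, .2,
                   .8,  1, .3,
                   .2, .3,  1), nrow=3,
                 dimnames=list(c("A","B","C"), c("A","B","C")))
stepwiseCorRank(toyCor, startNames="A", stepSize=1, bestHighestRank=TRUE)
#A is selected first, then C (least correlated to A), then B; with bestHighestRank=TRUE,
#earlier selections receive higher ranks: A=3, C=2, B=1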

Here's a test script I wrote to try this function out:

require(PerformanceAnalytics)
require(quantmod)

#the same seven mutual funds as in the FAA replication
mutualFunds <- c("VTSMX", "FDIVX", "VEIEX", "VFISX", "VBMFX", "QRAAX", "VGSIX")

#mid 1997 to end of 2012
getSymbols(mutualFunds, from="1997-06-30", to="2012-12-31")
tmp <- list()
for(fund in mutualFunds) {
  tmp[[fund]] <- Ad(get(fund))
}

#always use a list when intending to cbind/rbind large quantities of objects
adPrices <- do.call(cbind, args = tmp)
colnames(adPrices) <- gsub(".Adjusted", "", colnames(adPrices))

adRets <- Return.calculate(adPrices)

subset <- adRets["2012"]
corMat <- cor(subset)

tmp <- list()
for(i in 1:length(mutualFunds)) {
  rankRow <- stepwiseCorRank(corMat, startNames=mutualFunds[i])
  tmp[[i]] <- rankRow
}
rankDemo <- do.call(rbind, tmp)
rownames(rankDemo) <- mutualFunds
origRank <- rank(rowSums(corMat))
rankDemo <- rbind(rankDemo, origRank)
rownames(rankDemo)[8] <- "Original (VBMFX)"

heatmap(-rankDemo, Rowv=NA, Colv=NA, col=heat.colors(8), margins=c(6,6))

Essentially, using the 2012 year of returns for the 7 FAA mutual funds, I compared how different starting
securities changed the correlation ranking sequence.

Here are the results:

               VTSMX FDIVX VEIEX VFISX VBMFX QRAAX VGSIX
VTSMX              1     6     7     4     2     3     5
FDIVX              6     1     7     4     2     5     3
VEIEX              6     7     1     4     2     3     5
VFISX              2     6     7     1     3     4     5
VBMFX              2     6     7     4     1     3     5
QRAAX              5     6     7     4     2     1     3
VGSIX              5     6     7     4     2     3     1
Non-Sequential     5     6     7     2     1     3     4

In short, the algorithm is rather robust to starting security selection, at least judging by this small example.
However, comparing the VBMFX start to the non-sequential ranking, we see that VFISX changes from rank 2
in the non-sequential to rank 4, with VTSMX going from rank 5 to rank 2. From an intuitive perspective,
this makes sense, as both VBMFX and VFISX are bond funds, which have a low correlation with the other 5
equity-based mutual funds, but a higher correlation with each other, thus signifying that the algorithm seems
to be working as intended, at least insofar as this small example demonstrates. Here's a heatmap to
demonstrate this in visual form.

The ranking order (starting security) is the vertical axis, and the horizontal are the ranks, from white being
first, to red being last. Notice once again that the ranking orders are robust in general (consider each column
of colors descending), but each particular ranking order is unique.

So far, this code still has to be tested in terms of its applications to portfolio management and asset
allocation, but for those interested in such an idea, it's my hope that this provides a good reference point.

Thanks for reading.

An Attempt At Replicating Flexible Asset Allocation (FAA)
Posted on October 20, 2014 Posted in Asset Allocation, David Varadi, Portfolio Management, R Tagged
R 27 Comments

Since the people at Alpha Architect were so kind as to feature my blog in a post, I figured I'd investigate an
idea that I first found out about from their site, namely, flexible asset allocation. Here's the SSRN, and the
corresponding Alpha Architect post.

Here's the script I used for this replication, which is completely self-contained.

1 require(PerformanceAnalytics)
require(quantmod)
2
3
mutualFunds <- c("VTSMX", #Vanguard Total Stock Market Index
4 "FDIVX", #Fidelity Diversified International Fund
5 "VEIEX", #Vanguard Emerging Markets Stock Index Fund
6 "VFISX", #Vanguard Short-Term Treasury Fund
7 "VBMFX", #Vanguard Total Bond Market Index Fund
"QRAAX", #Oppenheimer Commodity Strategy Total Return
8 "VGSIX" #Vanguard REIT Index Fund
9 )
10
11 #mid 1997 to end of 2012
12 getSymbols(mutualFunds, from="1997-06-30", to="2012-12-31")
13 tmp <- list()
for(fund in mutualFunds) {
14 tmp[[fund]] <- Ad(get(fund))
15 }
16
17 #always use a list hwne intending to cbind/rbind large quantities of objects
18 adPrices <- do.call(cbind, args = tmp)
colnames(adPrices) <- gsub(".Adjusted", "", colnames(adPrices))
19
20
FAAreturns <- function(prices, monthLookback = 4,
21 weightMom=1, weightVol=.5, weightCor=.5,
22 riskFreeName="VFISX", bestN=3) {
23
24 returns <- Return.calculate(prices)
  monthlyEps <- endpoints(prices, on = "months")
  riskFreeCol <- grep(riskFreeName, colnames(prices))
  tmp <- list()
  dates <- list()

  for(i in 2:(length(monthlyEps) - monthLookback)) {

    #subset data
    priceData <- prices[monthlyEps[i]:monthlyEps[i+monthLookback],]
    returnsData <- returns[monthlyEps[i]:monthlyEps[i+monthLookback],]

    #perform computations
    momentum <- data.frame(t(t(priceData[nrow(priceData),])/t(priceData[1,]) - 1))
    priceData <- priceData[, momentum > 0] #remove securities with momentum < 0
    returnsData <- returnsData[, momentum > 0]
    momentum <- momentum[momentum > 0]
    names(momentum) <- colnames(returnsData)

    vol <- as.numeric(-sd.annualized(returnsData))
    #sumCors <- -colSums(cor(priceData[endpoints(priceData, on="months")]))
    sumCors <- -colSums(cor(returnsData, use="complete.obs"))
    stats <- data.frame(cbind(momentum, vol, sumCors))

    if(nrow(stats) > 1) {

      #perform ranking
      ranks <- data.frame(apply(stats, 2, rank))
      weightRankSum <- weightMom*ranks$momentum + weightVol*ranks$vol + weightCor*ranks$sumCors
      totalRank <- rank(weightRankSum)

      #find top N values, from http://stackoverflow.com/questions/2453326/fastest-way-to-find-second-third-highest-lowest-value-in-vector-or-column
      #thanks to Dr. Rob J. Hyndman
      upper <- length(names(returnsData))
      lower <- max(upper-bestN+1, 1)
      topNvals <- sort(totalRank, partial=seq(from=upper, to=lower))[c(upper:lower)]

      #compute weights
      longs <- totalRank %in% topNvals #invest in ranks length - bestN or higher (in R, rank 1 is lowest)
      longs <- longs/sum(longs) #equal weight all candidates
      longs[longs > 1/bestN] <- 1/bestN #in the event that we have fewer than top N invested into, lower weights to 1/top N
      names(longs) <- rownames(ranks)

    } else if(nrow(stats) == 1) { #only one security had positive momentum
      longs <- 1/bestN
      names(longs) <- rownames(stats)
    } else { #no securities had positive momentum
      longs <- 1
      names(longs) <- riskFreeName
    }

    #append removed names (those with momentum < 0)
    removedZeroes <- rep(0, ncol(returns)-length(longs))
    names(removedZeroes) <- names(returns)[!names(returns) %in% names(longs)]
    longs <- c(longs, removedZeroes)

    #reorder to be in the same column order as original returns/prices
    longs <- data.frame(t(longs))
    longs <- longs[, names(returns)]

    #append lists
    tmp[[i]] <- longs
    dates[[i]] <- index(returnsData)[nrow(returnsData)]
  }

  weights <- do.call(rbind, tmp)
  dates <- do.call(c, dates)
  weights <- xts(weights, order.by=as.Date(dates))
  weights[, riskFreeCol] <- weights[, riskFreeCol] + 1-rowSums(weights)
  strategyReturns <- Return.rebalancing(R = returns, weights = weights, geometric = FALSE)
  return(strategyReturns)
}

replicaAttempt <- FAAreturns(adPrices)
bestN4 <- FAAreturns(adPrices, bestN=4)
N3vol1cor1 <- FAAreturns(adPrices, weightVol = 1, weightCor = 1)
minRisk <- FAAreturns(adPrices, weightMom = 0, weightVol=1, weightCor=1)
pureMomentum <- FAAreturns(adPrices, weightMom=1, weightVol=0, weightCor=0)
maxDecor <- FAAreturns(adPrices, weightMom=0, weightVol=0, weightCor=1)
momDecor <- FAAreturns(adPrices, weightMom=1, weightVol=0, weightCor=1)

all <- cbind(replicaAttempt, bestN4, N3vol1cor1, minRisk, pureMomentum, maxDecor, momDecor)
colnames(all) <- c("Replica Attempt", "N4", "vol_1_cor_1", "minRisk",
                   "pureMomentum", "maxDecor", "momDecor")
charts.PerformanceSummary(all, colorset=c("black", "red", "blue", "green",
                                          "darkgrey", "purple", "orange"))

stats <- data.frame(t(rbind(Return.annualized(all)*100,
                            maxDrawdown(all)*100,
                            SharpeRatio.annualized(all))))
stats$Return_To_Drawdown <- stats[,1]/stats[,2]

Here's the formal procedure:

Using the monthly endpoint functionality in R, every month, looking over the past four months, I computed
momentum as the most recent price over the first price in the observed set (that is, the price four months
ago) minus one, and immediately removed any funds with a momentum less than zero (this was a suggestion
from Mr. David Varadi of CSS Analytics, with whom I'll be collaborating in the near future). Next, with the
pared-down universe, I ranked the funds by momentum, by annualized volatility (the results are identical
with plain standard deviation), and by the sum of their correlations with each other. Since higher volatility and
higher correlation are undesirable, I multiplied each by negative one before ranking. Next, I invested in the top N
funds every period; if there were fewer than N funds with positive momentum, each remaining fund received a
weight of 1/N, with the remainder placed into the risk-free asset, in this case VFISX. All price
and return data were daily adjusted data (as per the SSRN paper).
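
To make the momentum filter concrete, here is a minimal standalone sketch of just that step, using a made-up
two-row price window and hypothetical fund names (not the funds from the post):

# toy window: row 1 is the price four months ago, row 2 is the most recent price
priceData <- matrix(c(100, 104,    # fundA: +4% over the window
                      100,  98,    # fundB: -2%, gets dropped
                      100, 110),   # fundC: +10%
                    nrow = 2, dimnames = list(NULL, c("fundA", "fundB", "fundC")))

# momentum = most recent price over the first price, minus one
momentum <- priceData[nrow(priceData), ] / priceData[1, ] - 1

# discard anything with non-positive momentum
momentum[momentum > 0]
# fundA fundC
#  0.04  0.10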

However, my results do not match the paper's (or Alpha Architect's), in that I don't see the annualized
returns breaking 20%, nor, most importantly, do I see the single-digit drawdowns. I hope my code is clear
enough at every step that the source of the discrepancy can be spotted, but that aside, let me explain the idea.

The idea, for those familiar with trend following, is that in addition to seeking return through the
momentum anomaly (there are stacks of literature on the simple idea that what goes up tends to keep going up,
to an extent), there is also a place for risk management. This comes in the form of ranking correlation
and volatility, and giving a different weight to each individual component rank (that is, momentum has a
weight of 1, correlation .5, and volatility .5). The weighted sum of the ranks is then itself ranked (so two
layers of ranking) to produce a final aggregate rank.
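
To illustrate the two layers of ranking with toy numbers (these are made-up statistics, not output from the
strategy), suppose three funds survived the momentum filter; volatility and correlation are already negated so
that higher is better in every column:

stats <- data.frame(momentum = c(0.04, 0.10, 0.07),
                    vol      = c(-0.12, -0.20, -0.10),
                    sumCors  = c(-2.1, -2.5, -1.8),
                    row.names = c("fundA", "fundB", "fundC"))

# first layer: rank each metric separately (in R, rank 1 is the lowest value)
ranks <- data.frame(apply(stats, 2, rank))

# weighted sum of the component ranks: momentum 1, volatility .5, correlation .5
weightRankSum <- 1*ranks$momentum + 0.5*ranks$vol + 0.5*ranks$sumCors

# second layer: rank the weighted sums to get the final aggregate rank
totalRank <- rank(weightRankSum)
setNames(totalRank, rownames(stats))
# fundA fundB fundC
#     1     2     3

Here fundC ends up with the highest aggregate rank despite fundB having the best momentum, because fundC is
the least volatile and least correlated of the three.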

Unfortunately, when it comes to the implementation, the code has to be cluttered with some data munging
and edge-case checking, which takes a little bit away from the readability. To hammer home a slight technical
tangent: in R, whenever one plans on doing iterated appending (e.g. one table that is repeatedly appended to),
it is better to accumulate the results in a list and call rbind/cbind only once at the end, because R copies the
object on every assignment when repeatedly rbinding or cbinding, whereas appending each iteration's result onto
a list is cheap. The upside to data frames is that they're much easier to print to a console and to do
vectorized operations on; outside of tiny data frames, however, lists are more efficient for iterated accumulation.
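
For the curious, here is a toy illustration of that pattern (hypothetical data, unrelated to the strategy): the
first loop copies the growing data frame on every iteration, while the second accumulates into a list and binds
once at the end.

# slower: repeated rbind copies the entire accumulated data frame each time
out <- data.frame()
for(i in 1:1000) {
  out <- rbind(out, data.frame(iteration = i, value = rnorm(1)))
}

# faster: append each iteration's result to a list, then bind once at the end
tmp <- list()
for(i in 1:1000) {
  tmp[[i]] <- data.frame(iteration = i, value = rnorm(1))
}
out <- do.call(rbind, tmp)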

In any case, here's an examination of some variations of this strategy.

The first is a straightforward attempt at replication (3 of 7 securities, a weight of 1 to momentum, .5 each to
volatility and correlation). The second is that same setting, just with the top four securities instead of the top three.
A third is with three securities, but double the weighting on the two risk metrics (vol & cor). The next
several are conceptual permutations: a risk-minimization profile that puts no weight on momentum itself
(analogous to what the Smart Beta folks would call min-vol), a pure momentum strategy
(disregard vol and cor), a max decorrelation strategy (all weight on correlation), and finally, a hybrid of
momentum and max decorrelation.

Here is the performance chart:

Overall, this looks like evidence of robustness, given that I fundamentally changed the nature of the
strategies in quite a few cases, rather than simply tweaking the weights here or there. The
momentum/decorrelation hybrid is a bit difficult to see, so here's a clearer image of how it compared with
the original strategy.

Overall, a slightly smoother ride, though slightly lower in terms of returns. Here's the table comparing all
seven variations:

> stats
                Annualized.Return Worst.Drawdown Annualized.Sharpe.Ratio..Rf.0.. Return_To_Drawdown
Replica Attempt          14.43802      13.156252                        1.489724          1.0974268
N4                       12.48541      10.212778                        1.492447          1.2225281
vol_1_cor_1              12.86459      12.254390                        1.608721          1.0497944
minRisk                  11.26158       9.223409                        1.504654          1.2209786
pureMomentum             13.88501      14.401121                        1.135252          0.9641619
maxDecor                 11.89159      11.685492                        1.434220          1.0176368
momDecor                 14.03615      10.951574                        1.489358          1.2816563

Overall, there doesn't seem to be any objectively best variant, though pure momentum is definitely the worst
(as might be expected; otherwise the original paper wouldn't be as meaningful). If one is looking for return to
max drawdown, then the momentum/max decorrelation hybrid stands out, though the 4-security variant and the
minimum-risk variant also work (though they'd have to be leveraged a bit to bring the annualized returns
up to the same level). On Sharpe ratio, the variant with double the original weighting on volatility and
correlation stands out, though its return-to-drawdown ratio isn't the greatest.

However, the one aspect I take away from this endeavor is how small the asset universe is, along with
the following statistic:

> SharpeRatio.annualized(Return.calculate(adPrices))
                                    VTSMX     FDIVX     VEIEX    VFISX    VBMFX       QRAAX     VGSIX
Annualized Sharpe Ratio (Rf=0%) 0.2520994 0.3569858 0.2829207 1.794041 1.357554 -0.01184516 0.3062336

Aside from the two bond funds, which are notorious for delivering lower returns at lower risk, the Sharpe
ratios of the individual securities are far below 1. The strategy itself, on the other hand, has very respectable
Sharpe ratios while working with some rather sub-par components.

Simply put, consider running this asset allocation heuristic on your own set of strategies, as opposed to pre-
set funds. Furthermore, it is highly likely that the actual details of the ranking algorithm can be improved,
from different ranking metrics (add drawdown?) to more novel concepts such as stepwise correlation
ranking/selection.
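
As a parting sketch, here is one possible interpretation of stepwise correlation selection, hedged as purely my
own illustration rather than the CSS Analytics formulation: seed with the top-ranked asset, then greedily add
whichever remaining asset has the lowest average correlation to the assets already chosen. The function below
assumes a returns matrix and a named totalRank vector like the ones inside FAAreturns.

stepwiseSelect <- function(returnsData, totalRank, bestN) {
  corMat <- cor(returnsData, use = "complete.obs")
  # start with the asset holding the highest aggregate rank
  selected <- names(totalRank)[which.max(totalRank)]
  while(length(selected) < min(bestN, ncol(returnsData))) {
    remaining <- setdiff(colnames(returnsData), selected)
    # average correlation of each remaining asset to the current selection
    avgCor <- colMeans(corMat[selected, remaining, drop = FALSE])
    selected <- c(selected, names(which.min(avgCor)))
  }
  selected
}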

Thanks for reading.
