
List of Econometrics Honours Research Projects for 2016

ETC4860
Department of Econometrics and Business Statistics
Chief Examiner: Professor Xueyan Zhao

Prof. Heather Anderson


Fiscal Policy in Good Times and Bad
Several researchers have found evidence that fiscal policy in the USA can have a stronger effect
on output during recessions than during expansions. This project asks you to re-examine this
issue using data relating to Australia or another economy.
This topic would suit a student who is interested in macroeconomics and who is willing to learn
some programming. The student's background should include some macroeconomics and some
multivariate time series analysis (such as that in ETC3450). Concurrent study in ETC4410 is
also highly recommended.
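To fix ideas, the simplest state-dependent specification interacts an identified spending shock with a recession indicator; the smooth transition VAR of Teräsvirta and Yang (2014) generalises this. A minimal sketch with synthetic data (Python purely for illustration; all series and magnitudes are assumptions):

```python
# Minimal sketch: a state-dependent fiscal multiplier via a recession-dummy
# interaction, on synthetic data (all series and magnitudes are assumptions).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
g_shock = rng.normal(size=T)                            # identified spending shock
recession = (rng.uniform(size=T) < 0.3).astype(float)   # business-cycle state dummy
# output growth responds more strongly to the shock in recessions by construction
dy = 0.4 * g_shock + 0.6 * recession * g_shock + rng.normal(scale=0.5, size=T)

X = sm.add_constant(np.column_stack([g_shock, recession, recession * g_shock]))
res = sm.OLS(dy, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(res.params)   # a sizeable interaction coefficient signals state dependence
```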
References:
Perotti, R. (1999), Fiscal Policy in Good Times and Bad, Quarterly Journal of Economics, 114,
1399-1436.
Owyang, M.T., V. Ramey and S. Zubairy (2013), Are Government Spending Multipliers Greater
during Periods of Slack? Evidence from Twentieth Century Historical Data, American Economic
Review, 103, 129-134.
Teräsvirta, T. and Y. Yang (2014), Specification, Estimation and Evaluation of Vector Smooth
Transition Autoregressive Models with Applications, University of Aarhus, CREATES Research
Papers 2014-08.

Dr Michael Callan
Annuities: Are those irrational customers behaving rationally?
In some jurisdictions, it is compulsory to convert your pension savings into a sequence of
payments that insures against your longevity risk. Where compulsion does not exist, the evidence is that
the majority of potential customers choose to retain the cash. The academic literature and many
actuaries have often argued that such people are behaving irrationally. This project will examine the
possibility that it is not in customers' best interests to purchase the annuity, and hence that they are
behaving rationally.
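One simple quantitative angle is the "money's worth" of an annuity: the expected present value of its payments relative to its price. A minimal sketch with hypothetical mortality, discount and loading assumptions (Python purely for illustration; real work would use a published life table and market quotes):

```python
# Back-of-envelope "money's worth" of a life annuity: EPV of payments per
# dollar of premium. All inputs (mortality, discount rate, loading) are
# hypothetical illustrations, not calibrated values.
import numpy as np

r = 0.03                                         # flat discount rate (assumption)
ages = np.arange(65, 106)
qx = np.clip(0.01 * 1.1 ** (ages - 65), 0, 1)    # toy mortality rates from age 65
px = np.cumprod(1 - qx)                          # probability of surviving to each age
v = (1 + r) ** -np.arange(1, len(ages) + 1)      # discount factors

fair_price = np.sum(px * v)                      # EPV of $1 p.a. paid while alive
loading = 1.15                                   # insurer loading (assumption)
print(f"money's worth = {fair_price / (loading * fair_price):.2f}")
# a money's worth well below 1 is one rational reason to retain the cash
```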

References:
Brown, J. (2007), Rational and Behavioural Perspectives on the Role of Annuities in Retirement
Planning, NBER Working Paper No. 13537. http://www.nber.org/papers/w13537
Yaari, M. (1965), Uncertain Lifetime, Life Insurance, and the Theory of the Consumer, Review
of Economic Studies, 32(2), 137-150.

Prof. Lisa Cameron


An examination of determinants of income inequality and/or income mobility.
There is serious concern in a number of countries about the rate at which inequality is increasing
and how this is creating a gulf between the affluent middle class and other citizens. A range of
econometric techniques exist which allow one to identify the factors that are driving increases in
inequality, for example the techniques of DiNardo, Fortin and Lemieux (1996), as applied in
Cameron (2000), and Morduch and Sicular (2002). This topic would involve a survey of recent
techniques for decomposing inequality and the application of at least one of these techniques to
at least two years of household data. The paper could also, or alternatively, examine income
mobility (the extent to which people's incomes change over time). These techniques could be
applied to any country, depending on the availability of data. (For example, students could select
a country for which there are at least two years of data from the World Bank's Living Standards
Measurement Surveys (LSMS) or use the Indonesian socio-economic survey which is available
at Monash.)
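As a first computational step before any decomposition, one needs inequality measures by year. A minimal sketch (Python purely for illustration; synthetic lognormal incomes stand in for survey data):

```python
# Gini coefficients for two survey years, a starting point before applying
# full decompositions such as DiNardo-Fortin-Lemieux. Incomes are synthetic.
import numpy as np

def gini(income):
    """Gini coefficient via the sorted-index formula."""
    x = np.sort(np.asarray(income, dtype=float))
    n = len(x)
    return (2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum())) - (n + 1) / n

rng = np.random.default_rng(1)
year1 = rng.lognormal(mean=10.0, sigma=0.6, size=5000)   # toy incomes, year 1
year2 = rng.lognormal(mean=10.1, sigma=0.7, size=5000)   # toy incomes, year 2
print(gini(year1), gini(year2))   # has measured inequality risen between years?
```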
References:
Cameron, L. (2000), Income Inequality in Java: Relating the Increases to the Changing Age,
Education and Industrial Structure, Journal of Development Economics, 62, 149-180.
DiNardo, J., Fortin, N.M. and Lemieux, T. (1996), Labor Market Institutions and the
Distribution of Wages, 1973-1992: A Semiparametric Approach, Econometrica, 64(5),
1001-1044.
Morduch, J. and T. Sicular (2002), Rethinking Inequality Decomposition, with Evidence from
Rural China, Economic Journal, 112(476), 93-106.

Prof. Duangkamon Chotikapanich


Constructing a panel of annual measures of inequality and poverty for selected countries
in Asia
Tracking changes in inequality, welfare, poverty and pro-poor growth on national, regional, and
global levels requires income distribution data for as many countries as possible, as frequently
as possible. Available data are limited, however, appearing only every five years for some
countries and less often for many other countries. The aim of this project is to use interpolation
methods to estimate annual income distributions for selected Asian countries for the period
from 1995 to 2009. These annual income distributions will be used to track changes in
inequality, economic welfare, poverty, and pro-poor growth at the national and regional levels. The
countries of interest include Indonesia, Malaysia, the Philippines, Thailand and Vietnam.
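A minimal sketch of the interpolation idea (Python purely for illustration; the survey years and Gini values below are hypothetical):

```python
# The interpolation idea in miniature: inequality estimates observed only in
# survey years are interpolated to an annual series. All values hypothetical.
import numpy as np

survey_years = np.array([1996, 2002, 2007])
gini_obs = np.array([0.36, 0.39, 0.37])          # hypothetical survey Ginis

annual_years = np.arange(1995, 2010)
gini_annual = np.interp(annual_years, survey_years, gini_obs)
print(np.round(gini_annual, 3))                  # annual panel entry for one country
# the same idea applies to interpolating parameters of a fitted income
# distribution, from which poverty and welfare measures then follow
```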
References
Leigh, A. (2004), Deriving Long-Run Inequality Series from Tax Data, Economic Record,
81, S58-S70.

Prof. Di Cook
How do people read displays of temporal data in business analytics?
Time series of economic data are commonly displayed in one of a few ways. The classical
example is the line chart, with consecutive measurements connected by lines. A horizon graph
cuts a series and mirrors the two halves using color to indicate ups or downs. A streamgraph or
themeriver plot shows multiple series stacked and centered. A candlestick chart is used to show
stock highs and lows, and open and closing prices over time. Each of these more recent charts
was developed for a particular purpose, e.g. the candlestick chart monitors stock price movement
and the themeriver illustrates product segmentation. We will generate several graphics and
associated tasks and use eye-tracking to observe how people look at the charts. This project will
involve using the software R, for generating plots and for analysing eye-tracking data.
Zhao, Y., Cook, D., Hofmann, H., Majumder, M. and Roy Chowdhury, N. (2014), Mind Reading:
Using an Eye-tracker to See How People Are Looking at Lineups, International Journal of
Intelligent Technologies and Applied Statistics, 6(4), 393-413.
http://www.airitilibrary.com/Publication/alDetailedMesh?DocID=19985010-201312201401060028-201401060028-393-413
Javed, W., McDonnel, B. and Elmqvist, N. (2010), Graphical Perception of Multiple Time
Series, IEEE Transactions on Visualization and Computer Graphics, 16(6), 927-934.
http://www.computer.org/csdl/trans/tg/2010/06/ttg2010060927-abs.html
Hofmann, H., Follett, L., Majumder, M. and Cook, D. (2012), Graphical Tests for Power
Comparison of Competing Designs, IEEE Transactions on Visualization and Computer
Graphics, 18(12), 2441-2448. http://doi.ieeecomputersociety.org/10.1109/TVCG.2012.230

Prof. Di Cook
What can we learn about life in the city by mining Melbourne's Open Data Platform?
The City of Melbourne's open data platform (https://data.melbourne.vic.gov.au) has some
fabulous data sets on different aspects of the city, some collected by sensors, some by electronic
usage. There is the Melbourne bike share data, which contains up-to-the-minute bike counts at each
of the racks. There is also up-to-the-minute pedestrian sensor data for various sites around the
city. In addition there is data on energy use, parking, property leasing, ... The goal of this project
will be to decide on several interesting aspects of Melbourne to study, extract the data, and use
business analytics methods to visualise and model the data to answer the main questions. The
project will require using R for the data wrangling, visualisation and analysis.
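For the extraction step, the portal is a Socrata instance, so each dataset also has a machine-readable endpoint. A minimal sketch (Python for illustration only, although the project itself specifies R; the resource id is a placeholder to be looked up on the portal):

```python
# Minimal sketch of the extraction step. The portal is a Socrata instance,
# so each dataset has a JSON endpoint; "xxxx-xxxx" is a placeholder resource
# id to be looked up on the portal (e.g. for the pedestrian counts dataset).
import pandas as pd

RESOURCE_ID = "xxxx-xxxx"  # hypothetical: replace with a real dataset id
url = f"https://data.melbourne.vic.gov.au/resource/{RESOURCE_ID}.json?$limit=5000"
df = pd.read_json(url)     # Socrata returns an array of records
print(df.head())
print(df.dtypes)           # first look before wrangling and visualisation
```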
Hobbs, J., Wickham, H., Hofmann, H. and Cook, D. (2010) Glaciers Melt as Mountains Warm:
A Graphical Case Study, Computational Statistics, 25(4):569--586.
Fostveldt, L., Shum, A., Lyttle, I. and Cook, D. (2016) What Does the Data Collected During
PISA Testing of Teenagers Tell Us About Education Around the World? Chance
(https://github.com/ijlyttle/isu_pisa/tree/master/paper )
Hofmann, H., Cook, D., Kielion, C., Schloerke, B., Hobbs, J., Loy, A., Mosley, L., Rockoff,
D., Huang, Y., Wrolstad, D. and Yin, T. (2011), Delayed, Canceled, On Time, Boarding ...
Flying in the USA, Journal of Computational and Graphical Statistics, 20(2).
Wickham, H., Swayne, D. and Poole, D. (2009) Bay Area Blues: The Effect of the Housing
Crisis, In Beautiful Data, available at http://vita.had.co.nz/papers/bay-area-blues.pdf
Wickham, H. (2011) The Split-Apply-Combine Strategy for Data Analysis, Journal of
Statistical Software, 40(1): http://www.jstatsoft.org/article/view/v040i01
Grolemund, G, Wickham, H (2011) Dates and times made easy with lubridate, Journal of
Statistical Software, 40(3):http://www.jstatsoft.org/article/view/v040i03/
https://cran.r-project.org/web/packages/tidyr/vignettes/tidy-data.html

Associate Prof. Catherine Forbes


The use and abuse of Diebold-Mariano tests
Diebold and Mariano (1995) proposed a simple classical test (called the DM test) to evaluate
whether two sets of model-free forecasts are different from each other. This test has become
widely used, though not always in the way in which it was originally proposed. (The impact of
the test was so great that in 2002 it was one of only ten articles republished by the Journal of
Business and Economic Statistics as part of its 20-year anniversary issue.)
However, recently, Diebold (2015) reviewed the DM testing methodology and its use in
practice, and offered many insights regarding not only the methodology as originally conceived
but also as it has been used since. Given the importance attached to the evaluation of forecasts
these days, I think it would be worthwhile for a student to dig deeper into the "use and abuse"
of the DM test.
As a starting point, this project would involve the student reviewing the literature to seek out a
few examples (could be from economics, finance, or another area) where the Diebold and
Mariano (1995) procedure has been employed. Then, using at least one empirically relevant
example, the issues outlined in the recent Diebold (2015) paper would be explored to provide
some context for the insights described therein. The interested student could go on to explore
one of many possible further directions.
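As an aside on mechanics, the DM statistic itself is simple to compute. A minimal sketch (Python purely for illustration, squared-error loss, simulated forecast errors); the loss function and long-run variance estimator are choices the student would need to defend:

```python
# DM statistic for two forecast-error series e1, e2 under squared-error loss,
# with a truncated HAC variance for the loss differential. Illustrative data.
import numpy as np
from scipy import stats

def dm_test(e1, e2, h=1):
    """Return DM statistic and two-sided normal p-value for h-step forecasts."""
    d = e1**2 - e2**2                   # loss differential under squared loss
    T = len(d)
    gamma = [np.cov(d[k:], d[:T - k])[0, 1] for k in range(h)]
    lrv = gamma[0] + 2 * sum(gamma[1:])  # long-run variance, h-1 autocovariances
    dm = d.mean() / np.sqrt(lrv / T)
    return dm, 2 * stats.norm.sf(abs(dm))

rng = np.random.default_rng(0)
e1 = rng.normal(scale=1.0, size=200)    # errors from model 1
e2 = rng.normal(scale=0.9, size=200)    # errors from a slightly better model 2
print(dm_test(e1, e2))
```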

References
F.X. Diebold and R.S. Mariano (1995), Comparing Predictive Accuracy, Journal of Business
and Economic Statistics, 13(3), 253-263.
F.X. Diebold (2015), Comparing Predictive Accuracy, Twenty Years Later: A Personal
Perspective on the Use and Abuse of Diebold-Mariano Tests, Journal of Business & Economic
Statistics, 33(1), 1-9.

Dr David Frazier
Adding Summary Statistics within Indirect Inference
Indirect inference, and other simulation-based econometric procedures, are now standard
methods for estimating parameters in models where the likelihood function is intractable.
Broadly speaking, simulation-based procedures estimate parameters by matching a vector of
observed statistics, η(θ₀) (usually means or variances), to a vector of statistics simulated
under the structural model, η(θ). The parameter value that minimizes the distance between the
observed and simulated statistics, d(η(θ₀), η(θ)), is taken to be the parameter estimate. Such
a procedure is extremely helpful when the likelihood and/or moments from the structural model
of interest cannot be obtained in closed form, which is true for many models in finance and
economics.
The validity of simulation-based econometric procedures rests on the satisfaction of an
identification condition that requires the existence of a one-to-one relationship between the
observed and simulated statistics, η(θ₀) and η(θ); i.e., η(θ) = η(θ₀) if and only if θ = θ₀. It is
now well known that increasing the dimension of η(θ₀) will increase the efficiency of
simulation-based estimators, yielding parameter estimates that approach the efficiency of maximum
likelihood. However, to date, no research has been conducted on the effect these additional
dimensions have on the satisfaction of the identification condition that lies at the heart of most
simulation-based estimation procedures. The goal of this research is to analyse the impact of
adding additional summary statistics on this key condition. This research will explore this idea
within a few simple examples: a moving average model of order 2, MA(2), and a simple
stochastic volatility model.
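To make the matching idea concrete, here is a minimal sketch (Python purely for illustration; all data synthetic) that estimates an MA(2) by matching sample autocovariances between observed and simulated series, holding the simulation draws fixed across optimizer evaluations:

```python
# Matching in miniature: estimate an MA(2) by matching sample autocovariances
# (the statistics eta) between observed data and data simulated at candidate
# parameters, holding the simulation draws fixed (common random numbers).
import numpy as np
from scipy.optimize import minimize

def acov(x, lags=3):
    x = x - x.mean()
    n = len(x)
    return np.array([x[k:] @ x[:n - k] / n for k in range(lags)])

def ma2(theta, e):
    # y_t = e_t + theta1 * e_{t-1} + theta2 * e_{t-2}
    return e[2:] + theta[0] * e[1:-1] + theta[1] * e[:-2]

rng = np.random.default_rng(0)
y_obs = ma2(np.array([0.5, 0.25]), rng.normal(size=2002))   # "observed" data
eta_obs = acov(y_obs)

e_sim = rng.normal(size=20002)          # fixed draws reused at every evaluation
def distance(theta):
    return np.sum((eta_obs - acov(ma2(theta, e_sim))) ** 2)

res = minimize(distance, x0=[0.1, 0.1], method="Nelder-Mead")
print(res.x)    # near (0.5, 0.25) when the statistics identify the parameters
```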
Dr David Frazier and Dr Anastasios Panagiotelis
Learning and Forecasting R package downloads.
The open source statistical software package R has become increasingly popular as a
multifaceted platform for data analysis. One advantage of R is the user's ability to download
tailor-made software add-ons called packages to aid in her statistical analysis. Recently the largest
repository of R packages, known as CRAN, has made its download logs publicly available
and easily obtainable using R itself.
The aim of this project is to develop methodologies for forecasting the number of daily
downloads for some widely used R packages using a mix of machine learning and multivariate
forecasting approaches. No package is an island: often two packages will be substitutes for one
another, but on the other hand many packages will only work if another package, known as a
dependency, has already been installed. This facet of R induces dynamic dependencies
between packages, and an additional goal of the project will be to model this structure. Other
interesting challenges that could arise while modelling the data are structural breaks, zero
inflation (a large number of 0 values in some series) and the presence of outliers.
Although the focus will be on R, it should be noted that R is just a single project that follows
an open-source software model. According to a recent report by Black Duck Software, 78% of
companies use open source software, and for 66% of companies open source is the default
choice. As such, the models and methods developed in this project have the potential to be
broadly applicable in the IT industry.
This project will require extensive use of R and knowledge of the forecasting and business analytics
concepts covered in ETC2450 and ETC3250. Example concepts we will use to conduct this
analysis are random forests, k-means clustering, and ARIMA forecasting. If you are interested
in this project, and in answering the "pressing" question of whether, using tools easily available
in R, we can accurately forecast downloads of the R forecasting package, then this is the project for you!
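As a first pass at that question, the logs can be pulled from the public cranlogs web service and a univariate benchmark fitted. A minimal sketch (Python for illustration only, although the project centres on R; the endpoint and response shape are assumptions based on the public cranlogs API):

```python
# Minimal sketch: pull daily downloads of the "forecast" package from the
# cranlogs service and fit a weekly-seasonal ARIMA benchmark. The endpoint
# and response shape are assumptions based on the public cranlogs API.
import json
import urllib.request
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

url = "https://cranlogs.r-pkg.org/downloads/daily/2015-01-01:2015-12-31/forecast"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)
if isinstance(payload, list):      # the API wraps some queries in a list
    payload = payload[0]

days = pd.DataFrame(payload["downloads"])       # columns: day, downloads
y = (days.assign(day=pd.to_datetime(days["day"]))
         .set_index("day")["downloads"].asfreq("D").fillna(0))

fit = ARIMA(y, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7)).fit()
print(fit.forecast(steps=14))      # a two-week-ahead univariate benchmark
```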
Prof. Brett Inder
A Spatial Economic Model for Timor-Leste
Timor-Leste ranks as the poorest nation in the Asia-Pacific region. The aim of this project is to
build a comprehensive regional economic and social model of Timor-Leste, which fulfils two
purposes:
1. Document the state of economic activity, economic and social infrastructure and social
outcomes as reported in major surveys and census data
2. Demonstrate the linkages between economic activity and social outcomes through a series of
behavioural models
The model can then be used as a policy and monitoring tool for initiatives to facilitate economic
and social development in Timor-Leste. The project will require developing skills in merging
and sorting large data sets, presenting data using mapping software, and then some econometric
modelling to identify the behavioural relationships that underlie the linkages.
Reference:
John Quinterno, Running the Numbers: A Practical Guide to Regional Economic and Social
Analysis.
Dr Hsein Kew
Efficient inference for autoregressive models under time-varying variances

In autoregressive models, Xu and Phillips (2008) have shown that the usual OLS estimator is
inefficient relative to the infeasible GLS estimator in the presence of unconditional
heteroscedasticity. They also propose an adaptive estimator which delivers the same limit
distribution as the infeasible GLS estimator. Utilising this adaptive estimator, describe how you
would construct a standard Wald statistic for the autoregressive coefficients. Using the GAUSS
programming language, conduct Monte Carlo experiments to compare your new Wald test with
tests based on the OLS estimator considered in Phillips and Xu (2006). Compute and compare
the local power of these tests.
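To illustrate the design of such an experiment (Python rather than GAUSS, purely for illustration), the following minimal Monte Carlo sketch compares OLS with a crude feasible weighted estimator of an AR(1) coefficient under a deterministic variance trend; the rolling variance estimate merely stands in for the kernel smoothing of Xu and Phillips (2008):

```python
# Monte Carlo in miniature: OLS versus a crude feasible weighted estimator of
# an AR(1) coefficient when the innovation variance trends deterministically.
import numpy as np

def simulate(T, rho, rng):
    sigma = 0.5 + 1.5 * np.linspace(0, 1, T)     # deterministic variance trend
    e = sigma * rng.normal(size=T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + e[t]
    return y

def estimates(y, bw=20):
    x, z = y[:-1], y[1:]
    ols = (x @ z) / (x @ x)
    u = z - ols * x                               # OLS residuals
    s2 = np.convolve(u**2, np.ones(bw) / bw, mode="same")  # rolling variance
    w = 1.0 / s2
    gls = (w * x @ z) / (w * x @ x)               # feasible weighted estimator
    return ols, gls

rng = np.random.default_rng(0)
ests = np.array([estimates(simulate(400, 0.5, rng)) for _ in range(1000)])
print("OLS   mean, sd:", ests[:, 0].mean(), ests[:, 0].std())
print("aGLS  mean, sd:", ests[:, 1].mean(), ests[:, 1].std())
# the weighted estimator should display the smaller sd, as theory predicts
```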
References:
Phillips, P. C., & Xu, K. L. (2006). Inference in autoregression under heteroskedasticity. Journal
of Time Series Analysis, 27(2), 289-308.
Xu, K. L., & Phillips, P. C. (2008). Adaptive estimation of autoregressive models with timevarying variances. Journal of Econometrics, 142(1), 265-280.

Dr Jun Sung Kim


What Determines the Locations of Retail Chain Stores in Australia?
This project studies the location choice problem in the Australian retail industry. Focusing on
the observed entry decisions of large retail chains such as K-mart, Target, Coles, Woolworths,
etc., this project will investigate the important factors in their location choices, especially the degree
of complementarity from locating different chains of the same conglomerate in close proximity, and the
competition effects of locating similar types of stores in close proximity. A counterfactual analysis
can be performed to draw out potential policy implications. The estimation will use the
semiparametric (smoothed) maximum score estimator.
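A minimal sketch of the smoothed maximum score idea (Horowitz-style smoothing of Manski's objective; Python purely for illustration, synthetic entry decisions, and a bandwidth chosen only for demonstration):

```python
# Smoothed maximum score in miniature on synthetic binary entry decisions;
# the first coefficient is normalised to 1, as the estimator requires a
# scale normalisation, and the bandwidth is purely illustrative.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
beta_true = np.array([1.0, -0.5])
y = (X @ beta_true + rng.logistic(size=n) >= 0).astype(float)

h = n ** (-1 / 5)                        # smoothing bandwidth (assumption)
def neg_score(b2):
    index = X[:, 0] + b2 * X[:, 1]       # scale normalisation: beta1 = 1
    return -np.mean((2 * y - 1) * norm.cdf(index / h))

res = minimize_scalar(neg_score, bounds=(-3, 3), method="bounded")
print("estimated beta2:", res.x)         # should be near -0.5
```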
References
Bajari and Fox (2013), Measuring the Efficiency of an FCC Spectrum Auction, American
Economic Journal: Microeconomics, 5(1), 100-146.
Ellickson, Houghton, and Timmins (2013), Estimating Network Economies in Retail Chains:
A Revealed Preference Approach, RAND Journal of Economics, 44(2), 169-193.

Dr Bonsoo Koo
Can financial ratios predict excess returns of stocks?
The financial economics literature has long sought the answer to what drives stock prices, and a
considerable amount of work has investigated the predictability of stock returns; see
Spiegel (2008) and Koijen and Van Nieuwerburgh (2011) for recent surveys of the literature.
Ever since empirical evidence of the predictability of stock returns first emerged, an array of predictive
variables, including financial ratios, has been proposed, but there is no clear-cut evidence on
whether those variables are indeed successful predictors. This project attempts to find evidence
(whether favourable or unfavourable) on the return predictability of an array of financial ratios,
and to compare results for the US and Australian stock markets via the state-of-the-art LASSO
method.
In this task, the student is expected to review the literature on return predictability with financial
ratios and a variety of LASSO methods. Then, she is expected to apply methodologies related
to return predictability to actual data in order to evaluate the predictability of stock returns by
various financial ratios. This task entails regression methods, asset pricing, and time series
forecasting. Application will be done in either MATLAB or R-project.
Prerequisite: ETC3400 and ETC3460. Note that you might need to develop some programming
skills in MATLAB or R-project.
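A minimal sketch of the predictive LASSO step (Python purely for illustration; synthetic ratios stand in for the real US and Australian series, and the cross-validated penalty is one of several choices the student would compare):

```python
# Predictive LASSO with synthetic "financial ratios"; only the first ratio
# truly predicts, so the fit should zero out most of the other coefficients.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
T, k = 360, 12                            # 30 years of months, 12 ratios
ratios = rng.normal(size=(T, k))          # stand-ins for dividend yield, etc.
beta = np.zeros(k)
beta[0] = 0.3                             # one genuinely predictive ratio
X = ratios[:-1]                           # ratios observed at time t
y = X @ beta + rng.normal(size=T - 1)     # excess return over t to t+1

fit = LassoCV(cv=5).fit(X, y)
print(np.round(fit.coef_, 3))             # nonzero entries flag candidate predictors
```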
References
Koijen, R.S.J. and S. Van Nieuwerburgh (2011), Predictability of Returns and Cash Flows,
Annual Review of Financial Economics, 3, 467-491.
Spiegel, M. (2008), Forecasting the equity premium: where we stand today, Review of
Financial Studies, 21, 1453-1454.

A/Prof. Paul Lajbcygier


High Frequency Trading
In most equity markets more than 50% of all trades are generated by algorithmic or high frequency
trading (HFT). The proliferation of algorithmic trading has changed the trading landscape, and the
impact of HFT activity is at the core of many regulatory and financial discussions. Limited
data and fragmented markets have meant that these discussions have not been well informed.
We are able to accurately and unambiguously attribute profits to HFTs. The major contributions
of this work include: the isolation, identification and characterisation of specific HFTs;
clear and accurate profit calculations for HFTs; and resolution of whom HFTs trade against,
showing how and from whom HFTs obtain their profits. The analysis can be carried out over
many years, so we can study how HFT evolves over time. Our database covers all the stocks
traded on the ASX and is not restricted to a sub-sample of stocks, which makes for a bias-free study.
A/Prof. Paul Lajbcygier
Permanent Price Impact
As a consequence of recent technological advances the cost of trading in financial markets has
irrevocably changed. One important change relates to how trading affects prices; this is known
as price impact. Understanding price impact is vital as it helps in evaluating different trading
strategies and hence leads to optimal execution strategies that minimize trading costs. Recent
studies show that price impact exhibits temporal characteristics. That is, the execution of a trade
will impact prices of subsequent trades and may move prices to a new equilibrium. This is the
mechanism by which the market correctly prices assets and becomes efficient. We propose to
model permanent price impact in this work.

Prof Gael Martin


Forecasting S&P500 Volatility: Assessing the Relative Performance of Option-Implied
and Returns-based Measures
Volatility estimates play a central role in financial applications, with accurate forecasts of future
volatility being critical for asset pricing, portfolio management and Value at Risk (VaR)
calculations. Along with the information on volatility embedded in historical returns on a
financial asset, the prices of options written on the asset also shed light on the option market's
assessment of the volatility that is expected to prevail over the remaining life of the options. As
such, many forecasting exercises have used both sources of market data to extract information
on future volatility, with the relative accuracy of the option- and returns-based forecasts being
gauged via a variety of means.
In this project you will explore the relative forecasting power of various options- and
returns-based measures of volatility, using measures constructed for the S&P500 stock index for a
period that incorporates the recent global financial crisis.
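One standard way to gauge relative forecasting power is a Mincer-Zarnowitz style encompassing regression of realized volatility on the competing forecasts. A minimal sketch with synthetic data (Python purely for illustration; the variable construction is an assumption, not the prescribed method):

```python
# An encompassing regression of realized volatility on an option-implied
# measure and a lagged returns-based measure, with synthetic series standing
# in for the S&P500 constructions described in the references.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 500
true_vol = np.abs(np.cumsum(rng.normal(scale=0.05, size=T))) + 0.1
iv = true_vol + rng.normal(scale=0.03, size=T)      # implied vol: less noisy
rv_lag = np.r_[true_vol[0], true_vol[:-1]] + rng.normal(scale=0.05, size=T)
rv = true_vol + rng.normal(scale=0.02, size=T)      # realized volatility

X = sm.add_constant(np.column_stack([iv, rv_lag]))
res = sm.OLS(rv, X).fit(cov_type="HAC", cov_kwds={"maxlags": 10})
print(res.params)   # does implied vol subsume the returns-based forecast?
```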
References:
1. Blair, B.J, Poon, S-H. and Taylor, S.J. 2001. Forecasting S&P100 Volatility: the
Incremental Information Content of Implied Volatilities and High Frequency Index
Returns, Journal of Econometrics, 105: 5-26.
2. Busch, T, Christensen, B.J., and Nielsen, M.O. 2011. The Role of Implied Volatility in
Forecasting Future Realized Volatility and Jumps in Foreign Exchange, Stock and Bond
Markets, Journal of Econometrics, 160, 48-57.
3. Koopman, S. J., Jungbacker, B. and Hol, E. 2005. Forecasting Daily Variability of the
S&P100 Stock Index using Historical, Realized and Implied Volatility Measurements,
Journal of Empirical Finance, 12: 445-475.
4. Martens, M. and Zein, J. 2004. Predicting Financial Volatility: High-Frequency Time
Series Forecasts Vis-a-Vis Implied Volatility, Journal of Futures Markets, 24: 1005-1028.
5. Martin, G.M., Reidy, R. and Wright, J. 2009. Does the Option Market Produce Superior
Forecasts of Noise-corrected Volatility Measures? Journal of Applied Econometrics,
24: 77-104.
6. Pong, S., Shackleton, M.B., Taylor, S.J. and Xu, X. 2004. Forecasting Currency
Volatility: a Comparison of Implied Volatilities and AR(FI)MA Models, Journal of
Banking and Finance, 28: 2541-2563.
Additional discussion of volatility forecasting using high frequency measures (plus more
references) can be found in:
1. Maneesoonthorn, W., Martin, G.M, Forbes, C.S. and Grose, S., 2012, "Probabilistic
Forecasts of Volatility and its Risk Premia". Journal of Econometrics, 171, 217-236.
2. Maneesoonthorn, W., Forbes, C.S. and Martin, G.M, 2013, "Inference on Self-Exciting
Jumps in Prices and Volatility using High Frequency Measures". Working paper version
available at: http://www.buseco.monash.edu.au/ebs/pubs/wpapers/2013/wp28-13.pdf .
Also downloadable from arXiv.org: http://arxiv.org/abs/1401.3911
Note that an understanding of the (Bayesian) methodology used in these papers is not required.

Dr Colin O'Hare
Colin is happy to take on research topics in the area of mortality. Students are encouraged to
talk to Colin directly.

Prof Don Poskitt


Quantile Regression, Probability Forecasts and Trading Strategies
The purpose of this project is to examine the use of a technique called quantile regression as a
tool for constructing forecast distributions and then using such distributions to implement
trading rules, more specifically the Kelly criterion and modifications thereof. The issues that
need to be addressed are:
- How can quantile regression be used to produce a forecast distribution?
- Having constructed a distribution, how can we conduct appropriate inference, recognising
that the forecast distribution is a point estimate of the distribution of a future value?
- Is it possible to determine suitable diagnostic devices and associated calibration techniques?
- Given a simple trading rule, what is the impact of the previous features on the performance
of the rule?
It is unlikely that all of the issues raised here will be completely resolved because of the limited
time available, but an attempt will be made to make the first steps in solving these problems.
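As a starting point for the first issue, a minimal sketch (Python purely for illustration, synthetic heteroskedastic data) fits quantile regressions over a grid of probabilities and reads off a forecast distribution; note that the fitted quantiles may cross, which is exactly the problem addressed by Chernozhukov, Fernandez-Val, and Galichon (2010):

```python
# Step one in miniature: fit quantile regressions on a grid of probabilities
# and read off a forecast distribution for a future value. Synthetic
# heteroskedastic data; real work would use returns and chosen predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)
y = 0.5 * x + (1 + 0.5 * np.abs(x)) * rng.normal(size=T)

X = sm.add_constant(x)
x_new = sm.add_constant(np.array([[1.5]]), has_constant="add")
taus = np.arange(0.05, 1.0, 0.05)

fq = [sm.QuantReg(y, X).fit(q=t).predict(x_new)[0] for t in taus]
for t, q in zip(taus, fq):
    print(f"tau={t:.2f}  forecast quantile={q:.2f}")
# the (tau, quantile) pairs trace out the forecast distribution of y given
# x = 1.5; crossings, if any, motivate the rearrangement methods cited above
```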
An introduction to the basic ideas involved in quantile regression, with a collection of
references to the early literature on this topic, is given in Judge, Hill, Griffiths, Lütkepohl, and Lee
(1998, Section 20.4). A more detailed exposition can be found in Koenker (2005). Chernozhukov,
Fernandez-Val, and Galichon (2010) present a review of more recent contributions with a
discussion of outstanding issues and a development of some ideas geared towards their solution.
The latter are of direct relevance to the ultimate solution of the problems and issues outlined
above.
The Kelly criterion (as it is commonly called) provides the solution to the following problem:
if offered a risky venture (bet), how much should you invest (wager)? It is named after John
Kelly (Kelly, 1956), a scientist working at Bell Labs on problems in the then fledgling discipline
of information theory. Kelly's formula maximizes the return on invested wealth and is related
to the seminal work of Daniel Bernoulli, who in 1738 postulated that the marginal utility of an
extra amount of money was inversely proportional to the person's wealth (Bernoulli, 1954). Kelly's
formula was resuscitated just over a decade later in a paper by Thorp (1969), and this gave rise
to much critical analysis. An historical account of the Kelly criterion and of its use at the
gambling table and in the investment industry is given in Poundstone (2005). The publication
of Poundstone's book Fortune's Formula brought the Kelly capital growth criterion to the
attention of investors and rekindled interest in its application. MacLean, Thorp, and Ziemba
(2010) provides a discussion of the theory and practice of Kelly strategies, and a fuller analysis
of the advantages and disadvantages, and is undeniably the primary reference. For a
straightforward introduction to Kelly's formula see http://en.wikipedia.org/wiki/Kelly_criterion
Numerical implementation is important and you will need to develop programming skills and
learn to work with software such as Matlab and/or R.
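For concreteness, here is the Kelly fraction for the simplest binary bet, with a numerical check that it maximizes expected log growth (a worked example of the textbook formula; Python purely for illustration):

```python
# Kelly fraction for a binary bet: win probability p with net odds b (win b
# per unit staked, lose the stake otherwise); f* = (b*p - q) / b maximizes
# the expected log growth of wealth.
import numpy as np

def kelly_fraction(p, b):
    """Optimal fraction of wealth to wager on a p-probability, b-odds bet."""
    q = 1 - p
    return (b * p - q) / b

p, b = 0.55, 1.0                    # 55% chance to double the stake
print(kelly_fraction(p, b))         # = 0.10: wager 10% of wealth

# sanity check: expected log growth is indeed maximized near f*
fs = np.linspace(0.01, 0.5, 50)
growth = p * np.log(1 + b * fs) + (1 - p) * np.log(1 - fs)
print(fs[np.argmax(growth)])        # close to 0.10
```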
Prerequisites: ETC3400 and ETC3450
References
Bernoulli, D. (1954): Exposition of a New Theory on the Measurement of Risk, Econometrica,
22, 23-36. Translated by Louise Sommer.
Chernozhukov, V., I. Fernandez-Val, and A. Galichon (2010): Quantile and Probability Curves
Without Crossing, Econometrica, 78, 1093-1125.
Judge, G.G., R.C. Hill, W.E. Griffiths, H. Lütkepohl, and T.C. Lee (1998): The Theory and
Practice of Econometrics, New York: J. Wiley, second edition.
Kelly, J. (1956): A New Interpretation of Information Rate, Bell System Technical Journal,
35, 917-926.
Koenker, R. (2005): Quantile Regression, Econometric Society Monograph, Cambridge
University Press.

MacLean, L., E. Thorp, and W. Ziemba, eds. (2010): The Kelly Capital Growth Investment
Criterion: Theory and Practice, World Scientific.
Poundstone, W. (2005): Fortune's Formula: The Untold Story of the Scientific System That Beat
the Casinos and Wall Street, Hill and Wang.
Thorp, E.O. (1969): Optimal Gambling Systems for Favorable Games, Revue de l'Institut
International de Statistique, 37, 273-293.

Dr Vasilis Sarafidis
The Effect of Crime on Housing Prices: An empirical panel data analysis
One of the most widely studied effects of crime is the impact that crime may have on housing
prices. A major drawback of many of these studies is that they fail to control for endogeneity
and unobserved heterogeneity. The main objective of this project is to apply panel data analysis
to estimate the effect of different types of crime, such as alcohol-related crime, assaults and
robberies, on housing values. Australian data will be used. Spatial spillover and
contagion effects across different neighbourhoods may be taken into account.
Suggested references:
Boggess, L., Greenbaum, R. and G. Tita (2013), Does Crime Drive Housing Values?
Evidence from Los Angeles, Journal of Crime and Justice, 36, 299-318.
Ihlanfeldt, K. and T. Mayock (2010), Panel Data Estimates of the Effects of Different Types
of Crime on Housing Prices, Regional Science and Urban Economics, 40, 161-172.

Dr Vasilis Sarafidis
The 'alcohol availability hypothesis': An empirical analysis using panel data
The availability hypothesis posits that the greater the availability of alcohol, the more likely
there is to be alcohol-related harm. In the context of this hypothesis, availability is influenced by
the number of outlets, the hours they trade and the price of alcohol. A number of studies find a
positive correlation between alcohol outlet density and alcohol-related crime, while late-night
trading hours have also been shown to be a contributing factor in alcohol-related violence.
The purpose of this project is to analyse and test empirically the availability hypothesis using
panel data techniques based on an Australian sample.
Suggested references:

Burgess, M. and S. Moffatt (2011), The association between alcohol outlet density and assaults
on and around licensed premises, Crime and Justice Bulletin, No. 147.
Gyimah-Brempong, K. and J. Racine (2006), Alcohol availability and crime: a robust
approach, Applied Economics, 38, 1293-1307.
Dr Vasilis Sarafidis and Prof. Param Silvapulle
Dynamic panel data modelling of sovereign bond yield spreads and their drivers
Following the collapse of Lehman Brothers on 15 September 2008, sovereign bond yield
spreads widened in the EMU countries, particularly in the peripheral countries Greece,
Portugal, Ireland, Spain and Italy. The aim of this project is to employ a panel data model to
determine the main drivers of bond yield spreads before and after the credit crisis. Recently,
Gomez-Puig et al. (2014) listed a comprehensive set of country-specific and global variables which they
included in a panel data model. So far, several empirical studies have focused on 10-16
EMU/EU countries. This project will include 25 countries and a number of country-specific and
global variables selected from the references given below, and will then apply a
dynamic panel data model to determine the drivers of the bond yield spreads before and after
the credit crisis.
References:
Dell'Erba, Hausmann and Panizza (2013), Debt levels, debt composition and sovereign
spreads in emerging and advanced economies, available at:
http://oxrep.oxfordjournals.org/content/29/3/518.short
Gomez-Puig, M., Sosvilla-Rivero, S. and Ramos-Herrera, M. (2014), An update of EMU
sovereign yield spread drivers in times of crisis: A panel data analysis, IREA Working Paper
2014/07.
Martinez, L.B., Terceño, A. and Teruel, M. (2012), Sovereign Bond Spreads Analysis in the
European Union and European Monetary Union: A Panel Data Framework.
Matei, I. and Cheptea, A. (2013), Sovereign bond spread drivers in the EU market in the
aftermath of the global financial crisis, HAL archives-ouvertes.
A/Prof. Ralph Snyder
Economic Forecasting with a Damped Trend
The focus of this project would be on forecasting common Australian macroeconomic time
series using a modified version of damped trend exponential smoothing (Gardner Jr &
McKenzie, 1985). The model underlying this method has a time-dependent mean which follows
a random walk. The random walk is augmented by a short-run growth rate which is governed
by an autoregressive process. It is anticipated that the short-run growth rate adapts to the state
of the business cycle.
The modification to the traditional damped trend model would be the inclusion of a constant
long-run growth rate into the autoregressive component (Snyder, 2006). Questions which might
be explored are:
- Does the short-run growth rate reflect the effect of the business cycle?
- Does the inclusion of a long-run growth rate improve the forecasts?
- How does this cycle-in-growth model compare with a traditional cycle-in-level model
such as the Beveridge-Nelson model (Beveridge & Nelson, 1981)?
The required calculations could be done with Microsoft Excel, Matlab, Gauss, R, Eviews or
any other statistics or econometrics package with a programming capacity.
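As a point of departure, the standard (unmodified) damped trend method is available off the shelf. A minimal sketch in Python (one of several packages that could serve here), on a synthetic series whose short-run growth follows an AR(1); the Snyder (2006) modification would be coded on top of this baseline:

```python
# Fit the standard damped trend method to a synthetic series whose short-run
# growth rate follows an AR(1), then produce two years of forecasts.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
T = 120
growth = np.zeros(T)
for t in range(1, T):                       # AR(1) short-run growth rate
    growth[t] = 0.8 * growth[t - 1] + rng.normal(scale=0.2)
level = 100 + np.cumsum(0.5 + growth)       # random walk with drifting growth
y = pd.Series(level, index=pd.date_range("1986-01-01", periods=T, freq="QS"))

fit = ExponentialSmoothing(y, trend="add", damped_trend=True).fit()
print(fit.params.get("damping_trend"))      # estimated damping parameter
print(fit.forecast(8))                      # two years of quarterly forecasts
```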
References:
Beveridge, S., & Nelson, C. R. (1981). A new approach to the decomposition of economic time
series into permanent and transient components with particular attention to the
measurement of the business cycle. Journal of Monetary Economics, 7, 151-174.
Gardner Jr, E. S., & McKenzie, E. D. (1985). Forecasting trends in time series. Management
Science, 1237-1246.
Snyder, R. (2006). Comments on Gardner's new state of the art paper. International Journal of
Forecasting, 22, 673-676. doi:10.1016/j.ijforecast.2006.05.002
John Stapleton
Fiscal Deficits and Inflation
The relationship between fiscal deficits and inflation has long been a matter of dispute among
theoretical economists. While some economists argue on theoretical grounds that there is a
strong positive relationship between fiscal deficits and inflation others, such as Robert Barro,
contend that there is no relationship.
The issue of the relationship between fiscal deficits and inflation has assumed increased
importance in the wake of the global financial crisis as governments around the world have
massively increased deficit spending in an attempt to combat recession. We propose to
investigate the relationship by conducting an empirical study using panel data.
Structure:
This topic has three components:
- A review of the economic literature on the relationship between fiscal deficits and inflation.
- A review of the econometric literature on testing for unit roots and cointegration in panel data.
- An empirical study (see the sketch below).
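To fix ideas for the empirical component, a minimal sketch of a two-way fixed effects regression of inflation on deficits via the within transformation, on synthetic country-year data (Python purely for illustration; the real study would use actual panel data, preceded by the unit root and cointegration tests reviewed in the second component):

```python
# Two-way fixed effects via the within transformation, on a synthetic
# country-year panel in which the true deficit effect is 0.3.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product([range(30), range(25)],
                                 names=["country", "year"])
df = pd.DataFrame(index=idx).reset_index()
df["deficit"] = rng.normal(size=len(df)) + 0.05 * df["country"]
df["inflation"] = (0.3 * df["deficit"] + 0.1 * df["country"]
                   + rng.normal(size=len(df)))

for v in ["deficit", "inflation"]:          # demean by country and by year
    df[v + "_w"] = (df[v]
                    - df.groupby("country")[v].transform("mean")
                    - df.groupby("year")[v].transform("mean")
                    + df[v].mean())

res = sm.OLS(df["inflation_w"], sm.add_constant(df["deficit_w"])).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]})
print(res.params)                           # slope should be close to 0.3
```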

Prerequisites:
- Successful completion of ETC3450 is absolutely essential.
- Successful completion of ETC3410 is desirable.
- Some knowledge of macroeconomics is desirable.
References:
Catão, L. and M. Terrones (2003), Fiscal Deficits and Inflation, IMF Working Paper
WP/03/65.
Barro, R. (1989), The Ricardian Approach to Budget Deficits, Journal of Economic
Perspectives, 3(2), 37-54.
Hadri, K. (2000), Testing for Stationarity in Heterogeneous Panel Data, Econometrics
Journal, 3, 148-161.
Levin, A., Lin, C. and Chu, C. (2002), Unit Root Tests in Panel Data: Asymptotic and
Finite Sample Properties, Journal of Econometrics, 108, 1-24.
Im, K., Pesaran, M.H. and Shin, Y. (2003), Testing for Unit Roots in Heterogeneous Panels,
Journal of Econometrics, 115, 53-74.
Larsson, R., Lyhagen, J. and Löthgren, M. (2001), Likelihood-based Cointegration Tests
in Heterogeneous Panels, Econometrics Journal, 4, 109-142.

Prof. Xueyan Zhao


An Epidemic of Prescription Drug Addiction? - Addictive Prescription Drug Use among
Australians
There have been recent concerns about the rising use of legal prescription drugs in developed
countries. Media reports suggest that, due to increasing pressure from modern lifestyles, there is
an epidemic of prescription drug abuse in response to complaints such as sleeping problems,
stress and depression, but such drugs are potentially addictive and harmful. This project seeks
to provide empirical evidence with which to examine this claimed epidemic trend. Have
the medical and non-medical uses of prescription drugs such as painkillers, tranquilisers,
anti-depressants and steroids increased in Australia over the past decade? Is this a middle class
addiction, as some media reports have suggested? How do the prevalence rates vary by users'
education, income, occupation and employment industry? Are professionals more
likely to abuse tranquilisers, and workers in the hospitality or recreation industries more likely to
abuse steroids? How does prescription drug abuse relate to the user's mental health status? A large
individual-level survey dataset from the Australian National Drug Strategy Household Surveys
will be used to examine the trend in prescription drug use and abuse in the Australian
population, and to investigate the user characteristics associated with the consumption of
several addictive prescription drugs, including painkillers, tranquilisers, anti-depressants and
steroids. Correlation across the uses of different prescription drugs will also be examined.
The student should have completed the second and third year Applied Econometrics units
(ETC2410 and ETC3410) and should currently be taking the fourth year Microeconometrics
unit (ETC4420).
The following are reference papers for similar analyses of other recreational drug uses.
References:
Ramful, P. and Zhao, X. (2009). Demand for Marijuana, Cocaine and Heroin in Australia: A
Multivariate Probit Approach. Applied Economics, 41(4): 481-496.
Ramful, P. and Zhao, X. (2008), Heterogeneity in Alcohol Consumption: The Case of Beer,
Wine and Spirits in Australia, Economic Record 84(265): 207-222.

Prof. Xueyan Zhao


Elective surgery waiting times in Victorian public hospitals: equity issues
There has been increasing media attention regarding prolonged waiting times for elective
surgeries in public hospitals in Australia, especially for those without private health insurance.
Access to elective surgery is one of the key indicators of equity in health service provision in
a universal publicly insured healthcare system such as Australia's. Demand for elective
surgery services in Victoria has increased, and prolonged waiting times for public hospital
admission for elective surgeries such as knee replacement and cataract surgery have been under
media attention. Patients suffer or even die while awaiting a public hospital bed. The objective
of this project is to identify factors contributing to inequality in waiting times for elective
surgery services in Victorian public hospitals. Administrative individual patient data on
waiting times in Victorian public hospitals from the Elective Surgery Information System (ESIS;
see http://www.health.vic.gov.au/hdss/esis/index.htm) will be used to study the factors
contributing to the length of waiting time for elective surgeries. Particular attention will be
given to equity issues such as rural versus urban patients, insured versus uninsured, and
variations by type of surgery and urgency category. One hypothesis is that rural and remote
patients are disadvantaged for higher urgency category procedures such as cardiovascular
surgeries, whilst urban patients are disadvantaged for less urgent categories such as hip
replacement.
Regression models will be used. Panel or multi-level cluster features of the data can be explored
if desired. Detailed research questions and the econometric techniques used can vary with the
student's interest and background.
The student should have completed the second and third year Applied Econometrics units
(ETC2410 and ETC3410) or equivalent, and preferably have completed ETC4420
Microeconometrics.

Reference:
For research motivation and policy questions using NSW hospital patient data, see the following
papers. Note that the econometric model used will not be the same as (and will be simpler than)
those used in these papers.
Johar, M., Jones, G., Keane, M.P., Savage, E. & Stavrunova, O. 2013, 'Discrimination in a
universal health system: Explaining socioeconomic waiting time gaps', Journal of Health
Economics, vol. 32, no. 1, pp. 181-194.
Johar, M., Savage, E., Stavrunova, O., Jones, G. & Keane, M. 2012, 'Geographic Differences
in Hospital Waiting Times', Economic Record, vol. 88, no. 281, pp. 165-181.
