
Enterprise Risk

Structuring an Efficient Program for Model Governance

by Niall Lynas and Elizabeth Mays
Model governance refers to the practices undertaken to promote the proper development, use, and ongoing validation of models. The purpose of model governance is to minimize the risks associated with relying on models to make or inform decisions within an institution.

Financial institutions use models for a wide range of purposes, including forecasting credit losses, valuing complex derivatives, and evaluating exposures to changing interest rates. Banks are relying more and more on models to make automated decisions and to provide guidance to risk managers on other complex and important decisions.

This heightened reliance, in combination with regulatory issuances related to modeling and a generally increasing focus on risk mitigation over the past decade, has led banks to invest in improving their model governance infrastructures. Furthermore, models have been assigned a large share of the blame in the recent financial crisis for failing to capture important risks embedded in transactions, lending products, and securities. These issues have quickened adoption of model governance initiatives across the industry.

There has been a tendency for model governance initiatives to gravitate in one of two directions. When portfolios are stable and experiencing few losses, or if model risk is not deemed to be a priority, there is a desire to implement what could be considered a "tick the box" approach (a structure designed to satisfy minimum regulatory requirements, but with as little expenditure as possible). But when portfolios have experienced significant losses that models failed to predict, or when management has experienced regulator or auditor pressure, there can be a tendency to implement an extensive program that applies the most rigorous governance requirements to a comprehensive set of models.

Neither approach is usually appropriate. Clearly, a limited governance infrastructure that 1) lacks senior management support, 2) is executed by personnel who lack the requisite skills, or 3) misses key governance elements can lead to significant model risk. A history of stable, predictable losses and satisfactory model performance should not lull


44 March 2010 The RMA Journal


institutions into believing model governance is not needed. Historical stability is no guarantee that the future will be similar. In fact, the very reason models are used is to leverage predictive information to identify movements in key performance metrics before those movements occur.

An overly extensive program also can be harmful. First, there is the cost of staffing governance and validation functions. Given the widespread use of models, this cost can be very significant. Second, making governance too challenging can hinder the ability of management to adapt model usage as the business environment changes. Worse, requirements that are viewed as too onerous can lead business leaders to avoid using models altogether.

This article discusses factors that should be considered when planning a model governance framework, including understanding why governance is important and the role that key elements of governance play in mitigating model risks. Finally, we sketch out an example framework that categorizes models according to their degree of risk, and the complexity of the risk being managed, in order to ensure that appropriate practices are applied in all circumstances.

Why Do We Need Model Governance?
Before designing a governance framework, organizations should take time to consider why model governance is important. Models are representations of relationships between causal variables and outcome variables; as such, they are subject to being wrong. All models carry risk.

The development and ongoing management of models can involve complex processes requiring specialized knowledge and consideration of many types of information. If these processes are not performed correctly, there can be sizable negative impacts on the business, including rapid deterioration of model performance (leading to financial loss), models that perform significantly worse than those used by peer organizations, and materially inaccurate risk measurement. Model errors and poor model controls also subject financial institutions to considerable compliance and reputation risk.

Model risk can arise from many sources. First, there could be mistakes made in model building, including the use of inappropriate assumptions, flawed statistical techniques, or bad data. It's not uncommon to find errors in data used to develop models, especially when data is being combined from several sources. Second, a model may fail to incorporate all the risk factors that influence the outcome being modeled. Also, the relationship between the included risk factors and the outcome may change relative to what it was when the model was developed; this can happen when market participants, economic conditions, or processes inside the bank change. Another primary source of model risk is the application of models to populations beyond those for which the model was originally designed.

Although bank regulators have put forth model governance guidelines, governance practices should be followed not just to comply with those guidelines, but to help limit the risk of improper and imprudent decisions being made as a result of relying on models. Governance practices are intended to ensure that models are properly built, that they address the needs they are intended to address, that users are fully informed of model assumptions and limitations (and all models have limitations), and that models continue to perform as intended over time.

Elements of a Sound Program for Model Governance

Model Policy
Perhaps the most important ingredient of a sound governance program is a model policy. A formal policy is crucial to successful implementation of a governance program, as it should provide an interpretation of the regulatory requirements applicable to the organization, as well as the organization's own approach to mitigating model risk. Given the considerable costs associated with governance, the policy should reflect the "tone at the top" and convey senior management's commitment to model governance, both through statements within the policy and through the level of senior management engagement the policy requires.

Model Inventory
A model inventory is a catalogue of models that gives management a holistic view of the model risk faced by the organization and shows where (organizationally) that model risk resides. The inventory also helps promote consistent application of governance practices across business units.

In Bulletin 2000-16, the OCC recommends that a catalogue of models be maintained at the corporate-wide level and, via the Comptroller's Handbook on Retail Lending, it provides a useful list of what information should be included



for each model: name, description, type, date developed, source (in-house versus vendor), purpose, last validation date, next validation date, and the names of management contacts.1 The model inventory should also include a log of all outstanding issues that pertain to the models, including validation status and due dates, and management contacts for remediation. Some organizations also use their model inventory as a repository for storing all model-related documents, including model development documentation and validation reports.

Model Documentation
Model documentation serves two purposes: it facilitates independent review of the model development, and it serves as a corporate memory of the methods, data, and assumptions embedded in the model. Model users and reviewers may refer to the documentation over the life of the model to recall details that will help inform decisions related to model performance and usage. Auditors and regulators also will want to see this documentation to evaluate model development practices and methods.

Institutions may wish to adopt varying documentation standards for different categories of models, but there is a core set of topics that should be addressed for all model developments:
• What is the purpose of the model and how will it be used?
• What is the outcome being predicted or estimated?
• What data was used to develop the model (sources, variables, historical time frames, counts, and so forth)?
• What are the assumptions or judgmental components of the model?
• What are the model limitations? For example, a model built for a direct-to-consumer home equity portfolio should probably not be used for a broker portfolio, and a model built using data from a prime auto portfolio should not be used for a subprime portfolio.
• What methodologies and techniques were used? For example, is the model statistical, a set of cash flow equations, based on simple moving averages, or a combination of all three? Does it make use of Monte Carlo simulation or scenario analysis?
• Who are the model developers and approvers?
• How and how often will the model be benchmarked and validated going forward?
• How well did the model perform on a "benchmark" data set (defined below)? This information provides a standard for making the decision to replace the model in the event of its deterioration.

It may be useful for the bank to develop templates that lay out exactly what information is required as part of the documentation. Having consistent documentation standards for similar model development projects facilitates independent review and lets model developers know what is expected at the start of any development project.

Independent Peer Review
Modeling processes, especially model development processes, can be complex and comprise many different procedures. Often, these will include statistical analyses that may not be within the expertise of the managers overseeing the development project or of the auditors who perform regular reviews. In these circumstances, an independent review by a "peer" individual or team (one with experience in the modeling approach that was used and in identifying risks in modeling processes) can help reduce model risk. It also can facilitate the identification of assumptions that are inappropriate or not fully understood. This latter risk is particularly important, as assumptions come in many forms, ranging from data assumptions (for example, that the sample chosen is the most relevant to the population on which the model will be used) to statistical assumptions (for example, that losses will follow a specific distribution, or that there is a specific relationship between the dependent and independent variables).

In drafting independent review requirements, an important consideration is whether a unit dedicated solely to validation should be established. A dedicated unit allows validation personnel to focus more exclusively on validation work and on improving their validation skills (which can differ from the skills needed in developing models). A structure involving a dedicated unit is also commonly viewed as enhancing independence, which, as noted in RMA's Model Validation and Governance Survey,2 may now be considered a "leading practice."

However, larger organizations may also have the option of pulling resources from other development teams to provide independent reviews. This can enable validation personnel to keep their development skills sharp, and it often allows for a larger pool of resources from which validation personnel can be selected (thus making it easier to identify reviewers with appropriate experience).

Whichever structure is used for independent reviews, the organization should be careful to adhere to the spirit of independence. This usually goes beyond simply requiring that the reviewers had no involvement in the modeling process; it requires that they be free of influence that would prevent them from providing an honest critique of issues related to the model.3
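The core topics listed under Model Documentation above lend themselves to a structured template, which can double as a model inventory entry. The sketch below is only an illustration of such a template; every field name and the sample record are invented for this example, not taken from OCC guidance.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ModelRecord:
    """One documentation/inventory entry; all field names are illustrative."""
    name: str
    purpose: str                 # what the model is used for
    outcome: str                 # what it predicts or estimates
    data_sources: List[str]      # development data: sources, time frames, counts
    assumptions: List[str]       # judgmental components
    limitations: List[str]       # e.g., portfolios the model should not be applied to
    methodology: str             # statistical, cash flow equations, moving averages, ...
    developers: List[str]
    approvers: List[str]
    validation_schedule: str     # how and how often it will be benchmarked/validated
    benchmark_metric: Optional[float] = None   # performance on the benchmark data set
    last_validated: Optional[date] = None
    next_validation: Optional[date] = None
    open_issues: List[str] = field(default_factory=list)  # outstanding-issue log

# A hypothetical record for a home equity scorecard.
record = ModelRecord(
    name="HE Direct PD Scorecard",
    purpose="Rank-order default risk for direct-to-consumer home equity loans",
    outcome="12-month probability of default",
    data_sources=["Internal originations 2003-2006", "Credit bureau attributes"],
    assumptions=["Development sample is representative of future applicants"],
    limitations=["Not validated for broker-originated loans"],
    methodology="Logistic regression",
    developers=["A. Analyst"],
    approvers=["B. Manager"],
    validation_schedule="Quarterly back test against the benchmark K-S",
)
```

Consistent templates like this make it easy to spot missing items (an empty limitations list is itself a red flag) and to roll individual records up into the corporate-wide inventory.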



Back Testing
Perhaps the most important principle of model governance is that models be back tested on an ongoing basis to ensure they continue to perform in line with expectations. Back testing compares model forecasts or estimates generated in the past to actual outcomes. Using these results, analysts can calculate model accuracy measures, which are then compared to benchmark accuracy measures to determine whether the model remains accurate enough to use in its current form.

In developing benchmark accuracy measures, model performance should be measured on the development data sample as well as on a separate "validation sample."4 One of these samples (usually the validation sample) is used to establish a benchmark set of performance metrics to which future model back-testing results will be compared. The relevant metrics will depend on the type of model, but may include the Kolmogorov-Smirnov (K-S) statistic for scorecards, or perhaps root mean squared error for forecasting models.

By comparing the back test's metrics to the benchmark metrics, an analyst can see whether deterioration in model accuracy has taken place. Accuracy thresholds should be identified, although they should not be viewed as a "pass/fail" mark. Instead, they indicate a point at which additional actions should be taken. This may simply involve running additional reports or an ad hoc analysis so that the reasons for shifts can be understood and the need for model alterations can be evaluated. If a model is found to be underperforming, a decision must be made to modify or replace it. Sometimes models can be improved by "calibrating" model parameters using recent data. Other times, models will need to be rebuilt, or an entirely new model with a different structure will need to be implemented. In either case, a timeline should be set out for calibrating or replacing the model, and the documentation and independent review process will start once again.

Benchmarking
Benchmarking compares a model's results to those of a second model that predicts the same (or a similar) outcome. Large differences between the two models' results should be evaluated, and the reasons for the differences should be used to detect inappropriate assumptions or errors in calculations.

Benchmarking is best performed using independently constructed alternative models or "well validated" existing models. If an organization determines that such benchmarks don't exist, and if the model risk is not considered high, it may also determine that developing a benchmark model is not warranted. In such cases, benchmarking can still be performed by either selecting models that predict similar outcomes or identifying industry metrics associated with similar portfolios.5

Computational Testing
Model code should be tested on a regular basis to make sure there are no calculation errors. This testing can include selecting a sample of records for manual review or developing separate code to implement the same algorithm.

Oversight and Change Management
A key component of a governance program is the need for adequate oversight and change management controls. Oversight should involve active participation by managers in the management of the model. While managers may not need deep knowledge of all the nuances of the development process, they must have sufficient familiarity to understand the key model risks related to the model's intended use. This will entail understanding the data that was (and wasn't) used, the general process used to specify the model, the nature of the risks, and the key assumptions. Management approval for the use of new models or for changes to existing models should be formalized.

Having access to model details or permission to change a model may be an important issue, depending on the model's purpose and construct. Access to models used for decision making typically will be restricted to reduce the chance that knowledge of decision criteria will influence data inputs to the model. Restricting access to models used to quantify significant financial statement entries may be a key control under a company's Sarbanes-Oxley Section 404 compliance program.

Stress Testing
Although stress tests have received much attention in recent years, formalized approaches can still be quite rare and are often limited to Basel II reporting requirements. Two types of stress tests that should be applied to most models are scenario analysis and sensitivity analysis.

Scenario analysis involves computing model output under a specific set of assumptions, usually associated with macroeconomic conditions. This type of test is of particular importance for any model that estimates the future value of a metric that fluctuates significantly throughout the stages of the economic cycle.6 Ideally, a series of scenarios should be designed at the enterprise level so that results can be consolidated across portfolios.

Sensitivity analysis shows how model output is affected by fluctuations in one or more variables or parameters. It provides insight into whether the model is too reliant on one or more variables or parameters and indicates the level of model risk due to the potential volatility of model output.
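The sensitivity analysis just described can be sketched in a few lines: shock each input in turn and record the relative change in output. The loss-rate model and its coefficients below are stand-ins invented for illustration, not a real production model.

```python
def loss_model(unemployment, hpi_change, utilization):
    """Illustrative linear loss-rate model (coefficients are made up)."""
    return 0.02 + 0.004 * unemployment - 0.05 * hpi_change + 0.01 * utilization

def sensitivity(model, inputs, shock=0.10):
    """Relative change in model output for a +10% shock to each input in turn."""
    base_out = model(**inputs)
    results = {}
    for name in inputs:
        shocked = dict(inputs, **{name: inputs[name] * (1 + shock)})
        results[name] = (model(**shocked) - base_out) / base_out
    return results

base = {"unemployment": 5.0, "hpi_change": 0.03, "utilization": 0.6}
for var, impact in sensitivity(loss_model, base).items():
    print(f"{var}: {impact:+.1%} output change for a +10% input shock")
```

A variable whose modest shock moves the output disproportionately flags over-reliance on that input, and the spread of results indicates how volatile the model output could be.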

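The Back Testing section above names the K-S statistic for scorecards and root mean squared error for forecasting models as typical accuracy metrics. A minimal sketch of the comparison against a benchmark threshold follows; the scores, benchmark value, and tolerance are all invented for illustration.

```python
def ks_statistic(scores_bad, scores_good):
    """Two-sample K-S: maximum gap between the cumulative score
    distributions of bads and goods (higher = better separation)."""
    cutoffs = sorted(set(scores_bad) | set(scores_good))
    ks = 0.0
    for c in cutoffs:
        cdf_bad = sum(s <= c for s in scores_bad) / len(scores_bad)
        cdf_good = sum(s <= c for s in scores_good) / len(scores_good)
        ks = max(ks, abs(cdf_bad - cdf_good))
    return ks

def rmse(actual, predicted):
    """Root mean squared error for forecast-versus-actual comparisons."""
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)) ** 0.5

# Benchmark K-S established on the validation sample, plus an illustrative
# tolerance. Breaching it triggers further analysis, not an automatic "fail".
BENCHMARK_KS, KS_TOLERANCE = 0.45, 0.10

current_ks = ks_statistic(
    scores_bad=[520, 540, 560, 580, 600, 640],        # recent defaulted accounts
    scores_good=[600, 620, 650, 660, 680, 700, 720],  # recent non-defaulted accounts
)
review_needed = BENCHMARK_KS - current_ks > KS_TOLERANCE
print(f"current K-S = {current_ks:.2f} vs benchmark {BENCHMARK_KS:.2f}; "
      f"additional review needed: {review_needed}")
```

The same comparison applies to a forecasting model, with `rmse` on recent actual-versus-predicted values in place of the K-S statistic.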


Example of a Model Governance Framework
Given that the definitions of a model and model risk are open to interpretation, and in light of the considerable cost of model governance, an essential step in developing a governance infrastructure is to define the scope of tools and analyses that will be considered "models" and subject to the requirements of model policies. This step will typically exclude ad hoc analyses, simple historical averages, and other tools where model risk is deemed immaterial.

Once the scope has been defined, the cost of governance can be reduced further by categorizing models according to their levels of risk. To do so properly, however, one may need to take multiple considerations into account when determining the treatment of each model. Consideration should be given to a model's complexity of design and to its importance within the bank in determining risk management and business decisions (referred to as "model purpose").

For example, consider two models, both deemed to have high model risk. The purpose of the first model is to forecast losses, and it influences a range of processes such as loss reserves, budgeting, and collections staff management. Even if the modeling approach is straightforward and not considered complex, the model may be deemed high risk because of its widespread influence on management decisions. It may be appropriate to require an independent review carried out by a team outside the risk and loss forecasting function so that it is as far removed organizationally from the development team as possible.7

Now consider a model that is used to value an asset or liability and is considered high risk owing to the complexity of the modeling process. In this case, it may be much easier to identify individuals who don't have a vested interest in the model output but are close organizationally to the team that developed the model. In fact, selecting such individuals may be the most appropriate choice if the modeling approach involves highly specialized skills typically found only within one area of the organization.

Consideration of "model purpose" and "model complexity" allows us to categorize an inventory of models into quadrants, as illustrated in Figure 1. Categorized in this way, the quadrants represent homogeneous groups of models relative to the importance of each of the model governance elements listed earlier. Therefore, we can describe how each element should be applied to the models in each of the four quadrants, as illustrated in Table 1.

Figure 1: Model Risk Categorization
[Quadrant chart. Horizontal axis: model purpose, from low risk to high risk. Vertical axis: modeling process complexity, from low to high. Quadrants: A = low complexity/low-risk purpose, B = high complexity/low-risk purpose, C = low complexity/high-risk purpose, D = high complexity/high-risk purpose. An "out of scope" region sits below the quadrants.]

The size and culture of an organization will affect the manner in which such a framework should be implemented, including the definition of a model, the number of categories and how they are defined, and how each governance element is applied to each category. However, this example shows how the potentially complex risks associated with models can be addressed with a framework that facilitates ease of implementation while allowing flexibility to avoid some of the more burdensome costs associated with governance.

Conclusion
Banks that ignore model risk, or implement frameworks too light to really control it, do so at their own peril. Many organizations can attest to the significant costs, financial and otherwise, of assuming that their models will continue to work well, only to find that they don't. Nevertheless, many institutions may view the potentially high costs associated with implementing a sound governance program as a barrier.

If an organization takes the time to carefully plan its approach to model governance (taking into account the reasons why governance is important, the various aspects that make up model risk, and the elements of governance and how they can be used to mitigate those risks) and then structures a program that targets the elements of governance to the models where each element will be most effective, it can implement a structure that mitigates key risks while not becoming unnecessarily burdensome.

••
Elizabeth Mays is senior vice president and head of consumer risk modeling and analytics for JPMorgan Chase. Contact her at elizabeth.mays@chase.com. Niall Lynas is a senior risk manager at JPMorgan Chase. Contact him at niall.lynas@chase.com.

The views expressed in this paper do not necessarily represent the views of JPMorgan Chase & Co.

Notes
1. Although written for retail lending, with an inventory that pertains to credit-scoring models, the items listed apply to most types of models.



Table 1: Approach for Applying Governance Elements to Model Categories

Inventory tracking
- All quadrants (A-D): Required.

Documentation
- A, B: Reduced level of documentation required.
- C, D: High level of documentation required.

Independent peer review
- A: Not required.
- B: Required. The review team needs experience in the specific modeling approach and purpose, but can be selected from any area of the organization (as long as its members were not involved in the model's development or ongoing management).
- C: Required. The review team must have experience relevant to the model purpose, but does not need experience in the specific approach being used. A high degree of organizational separation is required to ensure independence (e.g., validation personnel are selected from a different group or line of business).
- D: Required. The review team needs experience in the specific modeling approach and purpose. A high degree of organizational separation is required to ensure independence.

Back testing/empirical validation
- A: Simple metrics (actual versus predicted at the portfolio level).
- B, C: Back testing of all model segments, with tolerance thresholds and action plans.
- D: Full back testing, including analysis of individual model variables, tolerance thresholds, and action plans.

Benchmarking
- A, B: Appropriate benchmarks are required to be identified and reviewed. Benchmarks can be industry metrics or models that predict similar outcomes, and trends are analyzed instead of absolute values.
- C: Appropriate benchmarks are required to be identified and reviewed. Benchmarks can be industry metrics or models that predict similar outcomes, and trends are analyzed instead of absolute values. A summary of reasons for differing trends in results is required to be compiled on a regular basis.
- D: Benchmarks must be independently constructed alternative models or "well validated" existing models. Further analysis is required if significant unexplainable differences are identified.

Model monitoring
- A: Required on a quarterly basis.
- B: Required on an annual basis.
- C: Required on a quarterly basis.
- D: Required on a quarterly basis; must include analysis of individual model characteristics.

Computational testing
- A, B: Required on an annual basis.
- C, D: Required on a quarterly basis.

Oversight and change management
- A: Protocol for these items does not need to be formalized.
- B, C, D: Protocol must be formalized, with designated oversight personnel, clearly defined approval authorities, and security mechanisms to prevent unintended or unapproved alterations to the model.

Stress testing
- A, B, C: Required for forecasting models, but not for models used to rank order.
- D: Required for all model types.
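The quadrant assignment of Figure 1, combined with a trimmed-down subset of the Table 1 requirements, can be expressed as a simple lookup. The boolean inputs and the requirement values below are illustrative only; in practice, the complexity and purpose ratings would come from the model policy's own scoring criteria.

```python
def quadrant(complexity_high, purpose_high_risk, in_scope=True):
    """Map a model to a Figure 1 quadrant (A-D), or None if out of scope."""
    if not in_scope:
        return None
    if complexity_high:
        return "D" if purpose_high_risk else "B"
    return "C" if purpose_high_risk else "A"

# A small, illustrative subset of Table 1, keyed by quadrant.
REQUIREMENTS = {
    "A": {"peer_review": False, "monitoring": "quarterly", "computational_testing": "annual"},
    "B": {"peer_review": True,  "monitoring": "annual",    "computational_testing": "annual"},
    "C": {"peer_review": True,  "monitoring": "quarterly", "computational_testing": "quarterly"},
    "D": {"peer_review": True,  "monitoring": "quarterly", "computational_testing": "quarterly"},
}

# e.g., a simple loss forecast with broad influence: low complexity, high-risk purpose.
q = quadrant(complexity_high=False, purpose_high_risk=True)
print(q, REQUIREMENTS[q])
```

Encoding the framework this way keeps the categorization consistent across business units and makes the requirement set for any new model mechanical to look up.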

2. See The RMA Journal, November 2009, pp. 60-65.

3. True independence may also call for physical separation of reviewers from development personnel. In the Securities and Exchange Commission's 2008 assessment of oversight at Bear Stearns (SEC's Oversight of Bear Stearns and Related Entities), the Office of Audits noted that "model validation personnel, modelers, and traders all sat together at the same desk" and that this has "the potential disadvantage of reducing the independence of the risk management function … in both fact and appearance."

4. The validation sample should not be used in the actual model development. Ideally, it should be an "out of time" validation sample from a time period different from the development sample. Evaluating the model on an out-of-time validation sample will help ensure that the model performs well in environments different from the one represented by the model development sample.

5. While the absolute values of these benchmarks may not be of interest given the different purposes or portfolio profiles, they can be useful for identifying trends that may be relevant to the model being benchmarked. For example, if a benchmark displays a change that is greater than the model output change by more than a certain threshold, this could trigger additional analysis to determine the reasons for the change (and, specifically, whether the model is capturing key predictive information that the benchmarks may be accessing).

6. While scenario analysis is mostly applicable to forecasting models, it also can be used to show how well models rank-order risk under economic conditions that are more extreme than those observed during development or "out of time" sample windows. For these "rank ordering" models, scenario analysis also can be used to show how cutoffs might need to be altered as conditions change.

7. The nature and size of a portfolio for which a model will be used may also influence the "purpose" categorization of models. For example, models used for very small portfolios may be classified as having a purpose that poses lower risk to the bank than those used for larger portfolios.

