
Application or Infrastructure Name (Acronym)

Disaster Recovery Plan


ITMS# XXXXX
Forms used for developing and documenting Disaster Recovery plans.
Insert Date Modified Here for Version Control (i.e. Version 2013.02.01)

Templates are intended to be a guide for completing department disaster recovery plans and may be modified to meet specific requirements.

For information regarding disaster recovery planning please review the Disaster Recovery website.

Originator: ITO BC/DR team (disaster@ford.com) Page 1 of 30


Confidential
Date Issued: 10/31/2006
Date Revised: 06/12/2013
Table of Contents
Introduction to Disaster Recovery................................................................3
Standard Contacts and Escalation Points....................................................6
DR001 Disaster Recovery Plan Sign-off......................................................9
DR002 Application/Infrastructure Profile.....................................................10
DR003 Vital/Critical Dependencies............................................................12
DR003a Vital/Critical Resources................................................................13
DR004 Disaster Recovery Team Planning Procedure................................14
DR004a Part 1 Disaster Recovery Team Testing Procedure......................19
DR004a Part 2- Disaster Recovery Team Testing Procedure.....................20
DR005 Alternate Hardware Site Details.....................................................23
DR006 Disaster Recovery Command Center (DRCC)...............................24
DR007 Disaster Recovery Plan Maintenance Schedule............................25
DR008 Disaster Recovery Plan History Log............................................26
DR009 Disaster Recovery Distribution Log.............................................27
Change Request Form (Optional) 28

Introduction to Disaster Recovery
Disaster Recovery (DR) is the technological aspect of business continuity planning. It is the advance planning and preparation necessary to
minimize loss and ensure continuity of the critical IT components of an organization in the event of a disaster (recovery of impacted
infrastructure, data, communications, and/or environment). For more information on Business Continuity (BC), refer to the corporate BC site.
Purpose
The purpose of this document is to provide an outline for a set of structured and formalized activities, system procedures, and action plans that
can be implemented in the event of a disaster that disrupts business operations.

The information presented in this template should be viewed as a guideline to help teams develop a model best suited for their DR
requirements. Therefore, each owner responsible for DR planning should add or remove areas of this template as necessary to ensure
successful recovery of their application or infrastructure production environment in the event of a disaster.
Definition and Scope
Unexpected interruptions to normal production will occasionally occur. Such interruptions may be caused by any number of things, such as a
power outage, human error, or a technical failure within the application or infrastructure. Additionally, some disruptions may be the result of a
cascading event at a remote "upstream" or geographically separated site.

Evacuation due to a hazardous materials spill, accessibility problems due to civil unrest, and wide-area power failures are examples of
business interruptions due to remote or foreign conditions. Most interruptions are temporary, with conditions returning to normal within a time
frame considered non-critical for the business environment. Other interruptions can quickly escalate into extended periods that severely
impair an organization's ability to conduct business. As the ability to do business is impaired, the customer base dwindles and market share is
lost.

The term disaster lends itself to a preconceived notion of a large-scale event, usually a natural disaster. In fact, each event must be
addressed within the context of the impact it has on the organization and the surrounding community. What might constitute a nuisance to a
large industrial facility could be a disaster to a small business. To be effective, a DR plan must establish the correct scope. A thorough risk
assessment must be performed; this will support the creation of an effective strategy for risk avoidance (disaster avoidance) and risk
mitigation (disaster response).
Types of threats to consider
Business and IT face many potential threats. Some are unique to business continuity planning, others to disaster recovery planning. BC and
DR planners should review this list of potential threats, but not be limited by it. New threats emerge, and old ones mutate into new ones. Let
these serve as thought starters for your DR planning efforts:
a. Natural disasters - Events such as fire, flood, earthquake, or severe weather. What if your normal work area was destroyed, or simply made
unavailable as a result of a natural (or man-made) disaster? Are records replicated? Do you have an alternate work location?
b. Personnel outage (strike, pandemic) - What if an employee strike or pandemic prevents personnel from coming to work? Who's
cross-trained on key business processes? Can employees work from home via the internet? Have employees been provided the
right tools, and have they tested remote access?
c. Threats - Threats can include bomb threats, disgruntled employees, etc. How do you respond to these emergency situations
(response plans and evacuation plans)?
d. Just-In-Time - What if critical suppliers were unable to meet expectations? Do you have sufficient inventory?
e. Power/water failure - Technology systems require electricity and cooling. Without them these systems will shut down. Regulatory
requirements may require potable water be available to employees, or businesses must close due to health risks. Do you have a
backup power and water supply?
f. Failure of IT components - What would you do if critical systems were down for any length of time? Are there manual processes?
Have personnel been trained to use them?
g. Data Loss - Is your data routinely backed up? Have you tested sample data restoration (see the sketch following this list)? What is the
process for restoring data, and is it stored offsite or in an approved/safe location?
h. Geo-Political and Civil unrest - The global economy presents unique risks that cannot be predicted or prevented. In some
countries, internal discord could prevent workers from safely reaching their places of employment, or put workers onsite at risk. Do
you have plans to cover such an event?
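As a minimal illustration of the Data Loss item above, the sketch below checks that the most recent backup file is within the expected backup window and spot-checks a restored sample file against its original checksum. The paths and thresholds are hypothetical placeholders, not prescribed values; adapt them to your actual backup tooling and storage locations.

    # Hypothetical sketch: verify backup freshness and spot-check a restored file.
    # Paths and thresholds are illustrative only -- adapt to your own backup
    # tooling and storage locations.
    import hashlib
    import os
    import time

    BACKUP_DIR = "/backups/myapp"      # assumed backup drop location
    MAX_AGE_HOURS = 24                 # expected backup window (e.g. a 24-hour RPO)

    def newest_backup_age_hours(backup_dir: str) -> float:
        """Return the age (in hours) of the most recently modified file in backup_dir."""
        newest = max(
            (os.path.join(backup_dir, name) for name in os.listdir(backup_dir)),
            key=os.path.getmtime,
        )
        return (time.time() - os.path.getmtime(newest)) / 3600.0

    def checksums_match(original: str, restored: str) -> bool:
        """Compare SHA-256 checksums of an original file and its restored copy."""
        def sha256(path: str) -> str:
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()
        return sha256(original) == sha256(restored)

    if __name__ == "__main__":
        age = newest_backup_age_hours(BACKUP_DIR)
        print(f"Newest backup is {age:.1f} hours old "
              f"({'OK' if age <= MAX_AGE_HOURS else 'OLDER THAN EXPECTED'})")
        # Spot-check one restored sample file against the production original.
        print("Sample restore matches:",
              checksums_match("/data/myapp/sample.dat", "/restore/myapp/sample.dat"))

A check of this kind does not replace a full restore test; it simply gives recovery teams a quick, repeatable signal that backups are current and readable.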
Identify Recovery Assumptions, Expectations, and Sign-Off
A full understanding of assumptions, expectations, and potential impact must be developed as part of the overall DR planning effort. This
should be the first step, as it will help to identify how to avoid and respond to disasters. Once the DR plan is complete, the application or
infrastructure team's DR Coordinator, and/or the manager or department head of the application or infrastructure for which the DR plan is being
developed, must sign off on the DR plan to ensure that the core components were sufficiently documented and will enable recovery teams to
respond to a disaster affecting the application or infrastructure.
- ITO infrastructure support teams recover components such as servers, storage, routers, operating systems, interfacing software, databases, monitoring, etc.
- Application owners MUST have properly configured backups scheduled for their application, and are responsible for validating recovery of the application.
- Application and Business Owners have properly contracted appropriate production and recovery systems and maintain current vendor Service Level Agreements.
- Recovery plans/tasks are sufficiently documented and available to teams in the event of a business disruption. These may include run books or other procedures that can be accessed during an emergency.
- Key personnel and contact information methods for support teams are defined.
- The DR plan is used by many different teams. Provide enough detail regarding the application and associated infrastructure, as there will be many different teams working together to help in the recovery effort.
Vital/Critical Dependencies and Resources

Vital and critical applications, resources, and associated infrastructure dependencies impacting this DR plan must be evaluated and cross-
referenced with proper recovery plans. These dependencies may be viewed as upstream or downstream. Upstream services are those that
your application or infrastructure depends on for production. Downstream dependencies are those services that your application or
infrastructure supports (others depend on you).

Any resources (people, tools, documentation, etc.) needed for the recovery effort of applications and infrastructure, and how to obtain them,
must be evaluated and cross-referenced here. Ensure supporting documentation is available off-site.
- Hardware, application and/or service templates required for the recovery of the application
- Installation and operations guides required for the recovery of the application
- User Workaround Procedures, Standard Operating Procedures (SOPs), etc.
- Architectural/Business flow diagrams
Recovery teams, tasks, damage assessment, and roles/responsibilities for applications
and infrastructure
Recovery teams need to have pre-defined tasks (steps) to conduct damage assessment, provide triage, consider mitigation options, and
perform recovery operations. Simply put, recovery teams want to know exactly what they have to do. One effective method is to use a
"timeline" document which defines specific tasks to be performed, when they are to be performed, and by whom. The timeline should include
the following:
- Planning (define team members, contact information, central communications information, damage assessment, triage, mitigation, coordination with crisis management and other support teams, change control management, etc.)
- Core system recovery (recovery team tasks, hardware provisioning, core system recovery, etc.)
- Application/database recovery steps (restoration from backup, perform initial validation, etc.)
- Post recovery (final validation, restart business operations, issues and follow-up, etc.)

Additionally, the timeline captures other (external) tasks that may need to be performed simultaneously, providing a record of events,
communications, situation report data, change control tracking, and follow-up issues (for an example of the timeline see section DR004 in this
document).
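One lightweight way to capture such a timeline is a simple list of task records that can be printed or exported as a situation report during the event. The sketch below is illustrative only; the field names and example entries are assumptions modeled on the DR004 columns, not a mandated format.

    # Illustrative timeline record for recovery tasks; field names and example
    # entries are assumptions modeled on the DR004 columns, not a mandated format.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TimelineTask:
        number: int
        task: str
        responsibility: str
        planned_start: str             # e.g. "TOD + 3 hours" (TOD = Time of Disaster)
        planned_duration_hours: float
        actual_start: Optional[str] = None
        actual_duration_hours: Optional[float] = None
        comments: str = ""

    timeline = [
        TimelineTask(1, "Travel to recovery site, gather team, review situation",
                     "DR Coordinator", "TOD + 3 hours", 1.0),
        TimelineTask(2, "Establish communication with incident management, initial damage assessment",
                     "DR Coordinator", "TOD + 4 hours", 3.0),
        TimelineTask(8, "Begin core system recovery (engage recovery resources)",
                     "Server operations", "TOD + 7 hours", 4.0),
    ]

    # Print a simple situation-report view of the timeline.
    for t in timeline:
        status = f"started {t.actual_start}" if t.actual_start else "not started"
        print(f"#{t.number:>2}  {t.task:<70}  {t.responsibility:<18}  {status}")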
Alternate Site Planning
Access to facilities is critical to business operations. When situations interrupt those routines (weather, local emergency, power outage, etc.),
alternatives must be considered. Technology teams need to plan for these events. Define where employees go to perform routine
operational processes if the primary site is not available (and what access rights they need). How are these processes performed? If your primary technician
is not available, who will perform the task? What technologies are needed to do the job (VPN permissions, access to file shares, mirrored
storage, network redundancy, consoles to gain access to the servers and systems)?
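A quick way to validate some of these prerequisites from an alternate work location is a simple reachability check. The host names and ports below are placeholders only, not actual endpoints; substitute your real VPN gateway, file shares, and console servers.

    # Hypothetical reachability check for services an alternate work site depends on.
    # Host names and ports are placeholders only.
    import socket

    ENDPOINTS = {
        "VPN gateway": ("vpn.example.com", 443),
        "File share (SMB)": ("fileshare.example.com", 445),
        "Console server (SSH)": ("console.example.com", 22),
    }

    def reachable(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, (host, port) in ENDPOINTS.items():
        status = "reachable" if reachable(host, port) else "NOT reachable"
        print(f"{name:<22} {host}:{port:<5} {status}")

Running a check like this as part of routine alternate-site testing helps confirm that remote access tools actually work before they are needed in an emergency.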
Disaster Recovery Command Center (DRCC)
A DRCC is used by a crisis management team or other business management teams to mitigate impact of the event, direct recovery efforts,
and resume operations.

The facility must be sufficiently staffed and equipped to provide effective incident and communication "command and control" in order to
facilitate strategic/tactical decisions, and direct communications to those that have a "need to know" (management, recovery teams,
customers, and as needed the public).

If appropriate, this facility may be located away from other business facilities in case a crisis renders the primary DRCC unusable.
Team Awareness and Training
Employees in general, and recovery teams in particular, need the right amount of awareness or skill-set training so they understand what to do in
response to business interruptions. During an emergency, employees' and recovery teams' routine assignments, roles, and responsibilities are
immediately affected (or changed) in order to focus on preserving life, resolving the incident at hand, and minimizing business interruption and
damage to property.

There is a difference between training and awareness. In awareness activities, the learner is the recipient of information, whereas the learner
in a training environment has a more active role.
Awareness relies on reaching broad audiences with attractive packaging techniques. Awareness activities may include (but are not limited to):
- How to respond to a disaster-like event (activate or notify crisis management)
- How are recovery teams contacted, and where will they report for assignment?
- Where is the DR plan located, and how do recovery teams access it?

Training is more formal, having the goal of building knowledge and skills to facilitate job performance. Training activities may include (but
are not limited to):
- Procedures for recovering applications and infrastructure
- Performing a tape recovery of applications or databases
- Coordination efforts in handling a large outage (communication, triage, task assignments, etc.)

Proficiency increases when lecture, demonstration, and skill-set testing methods are combined. To be effective, management must endorse
training and awareness, which may involve one or more of the following methods:
- Lecture-based or self-study - Subject matter training targeted to achieve the necessary skill-set proficiency in specific recovery team roles, responsibilities, or tasks.
- Demonstration - Actual or simulated (hands-on) demonstration of proficiency involving specific recovery tasks.
- Validation - Recovery team proficiency should be validated to gauge the level of training a recovery team has actually achieved. Knowledge-base proficiency may be demonstrated using a combination of lecture Q&A sessions and demonstration results.

Complete the following templates from the Corporate Business Continuity Templates:
BCP012 Training and Awareness Plan
BCP012a Training and Awareness Evaluation (Optional)

Testing
The process by which DR plans are validated is referred to as DR testing. The word "test" tends to imply a "pass or
fail" process. In reality, failure occurs only when an organization decides not to develop an effective DR plan or disregards annual DR testing
per ITPM. Types of exercises include:
- Communications - Tests the validity of the employee calling tree process and verifies the accuracy of the emergency contact information contained in the plan (e.g. employees, critical suppliers, business partners, etc.); see the calling-tree sketch following this list.
- Peer Review - The plan is reviewed for accuracy and completeness by personnel other than the owner or author who have the appropriate technical, procedural, or business knowledge. Sometimes called a blink test: present your plan, state what everyone is going to do, and see if anyone blinks or flinches. If they do, follow up to find out their concerns. New plans should be peer reviewed before being considered final, and reviewed for discrepancies at least annually.
- Table Top Exercise - Raises awareness of business continuity team members, who walk through the plan and test their knowledge of key roles and responsibilities of recovery teams and other actions contained in the plan. Walkthrough testing employs one or more example emergency scenarios that prompt test participants to describe likely responses to each step of an emergency challenge to business operations. Walkthrough testing helps participants understand how recovery teams work together and make decisions. It is the least complex test as it does not involve shutting down production operations.
- Disaster Recovery Test - Test of technology recovery plans used to recover applications and/or infrastructure (e.g. servers, network, applications, business objects, etc.). All aspects of key technology (applications and infrastructure) should be tested over the course of a year.
- Business Continuity Test - Test of continuity plans for people and processes. For instance, if your BC plans are to work from an alternate site, or to use an alternate way of doing a task, test those steps. Each key process should be tested annually.
- Alternate Site Test - Tests the ability to relocate resources within an SLA (e.g. employees, materials stored off site, systems connectivity, applications, etc.) to an alternate work site. Also verifies that resources required by individuals are accurately described in the plan and are able to be provided at the alternate site when required. Alternate Site testing does not necessarily involve live network or systems connectivity.
- Simulated Full Scale Test - Simulates a business interruption and determines how well the plan responds to a specific emergency event in a simulated operational environment. Simulated full scale testing requires thorough review, planning, and appropriate management approvals and controls in order to avoid interference with production.
Maintenance, Distribution, Storage, and Security
Document maintenance is described as the best practices associated with creating, maintaining, and distributing the various types of documentation
associated with DR plans. In an effort to standardize what comprises a well-documented DR plan, a cover page containing the "core elements" is provided, in
addition to a maintenance page defining when these core elements should be completed.

Your overall Disaster Recovery (DR) plan may be comprised of many disparate documents (e.g. run books, vendor-specific configuration manuals,
etc.). These documents should be "published" using various media (paper or electronic). It may be beneficial to consider using a "front-end"
interface to present user-friendly access to the vital/critical recovery documents (for example RoboHelp, a website, Word, Excel, PowerPoint, PDF, etc.).

Security of vital/critical documents is necessary per corporate directives. Documents containing personal information should include procedures
for protecting that information from getting into the wrong hands. Where appropriate, they should be protected per HIPAA and other regulatory requirements.
Electronic versions should be password protected and encrypted where appropriate when stored on portable media such as laptops, USB devices,
or CDs/DVDs.
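As one possible approach, a minimal sketch of encrypting a plan document before copying it to portable media is shown below. It assumes the third-party "cryptography" package is available and the file name is hypothetical; it is an illustration, not an endorsed corporate standard, and the key must be stored and transported separately from the media.

    # Sketch only: symmetric encryption of a DR plan document before copying it
    # to portable media. Assumes the third-party "cryptography" package is
    # installed (pip install cryptography); the file name is hypothetical.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # keep this key in a separate, secure location
    fernet = Fernet(key)

    with open("dr_plan.pdf", "rb") as f:     # hypothetical plan document
        ciphertext = fernet.encrypt(f.read())

    with open("dr_plan.pdf.enc", "wb") as f:
        f.write(ciphertext)

    print("Encrypted copy written; transport the key separately from the media.")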

When developing your DR plans, follow corporate documentation and information management standards. Any information which is sensitive (e.g.
personal information, access information, etc.) must be labeled as such (e.g. confidential, proprietary, etc.).

When storing documents off-site and at alternate locations, ensure they are protected from plain view or open access. Do not store DR plans in
vehicles. It may be appropriate to use a two-person integrity system to provide separation of duties and to prevent any one person from having full
access to sensitive information (e.g. split a password into two sealed envelopes to be opened only by authorized agents).
Final thoughts
The recovery effort often requires restoring applications or rebuilding infrastructure to an alternate host at an alternate site. As a result, some
application and infrastructure components will need to be modified (on the application or supporting core operating system) to restart
production once the recovery tasks are started and completed. Examples could include host names or IP addresses.

The DR plan development process may initially be very complex and involve participation from several different groups (e.g. management
buy-in, end users, customers, application and infrastructure development, engineering and support teams). The process will take time to fully
mature from initial scope development to final plan approval. As the big picture unfolds (e.g. what, when, how, and why of what is being
recovered), thorough DR plan testing and peer review will help surface where gaps should be addressed and resolved or mitigated.

By now it should be realized that DR planning is a continual improvement process which involves thorough evaluation of impact, change, and
testing. Over time, through testing and real invocations, an effective DR plan will ultimately sustain the business during unplanned outages.

Standard Contacts and Escalation Points
Purpose: A resource for contacts/escalation points which can be used throughout your DR plans.

(Columns: Area; Solution; Member; Email; Tool; Service Group; Phone; Alternate #)

Main IT Service Desk
  First Contact - Helpdesk - www.request.ford.com - 888-317-4957
  First Contact - GSDICTRL - gsdictrl@ford.com - www.request.ford.com - 800-322-3399 (alt. 313-248-1166)
  1st Escalation - Thomas Woods - twoods19@ford.com - 313-59-46997 (alt. 313-580-9233)
  2nd Escalation - Pramanan - pramanan@ford.com - +91-44-66553140 (alt. 357-3140 or +918754468934)
  3rd Escalation - Max Good - mgood1@ford.com - 313-323-9727 (alt. 313-433-2472)

Incident Management
  Reference Materials - https://www.tc.ford.com/ts/operations/OPS/IPM/default.aspx
  Escalation - Heather Dixon - hdixon9@ford.com - 313-594-2379 (alt. 313-820-7546)
  Escalation - Pat Reiner - Preiner2@ford.com - 313-845-0769 (alt. 810-632-0242, 313-702-0141)

Disaster Recovery
  Reference Materials - disaster@ford.com - http://www.tc.ford.com/ts/operations/OPS/DS/default.aspx
  1st Contact, Mainframe - Holly Bianchi - hbianchi@ford.com - 313-322-1085 (alt. 313-520-8041)
  1st Contact, Client/Server - Tony Michewicz - amichewicz@ford.com - 313-337-5485 (alt. 313-574-0539)
  1st Contact, Client/Server - Wendy Buttermore - wbutterm@ford.com - 313-845-1791 (alt. 248-219-0764)
  1st Contact, Client/Server - Mary Houdek - mhoudek@ford.com - 313-390-7762 (alt. 734-717-8396)
  Escalation - Steve Minch - sminch@ford.com - 313-323-8267 (alt. 313-467-1187)

Server Deployment
  First Contact - www.request.ford.com - "The Americas Server Deployment"
  1st Escalation - Tracy Blaskie - tblaskie@ford.com - 313-594-3617 (alt. Work Cell 313-805-1175, Personal Cell 810-2 4512)
  2nd Escalation (Windows) - Dave Turczyn - dturcz1@ford.com - 313-845-2649
  3rd Escalation (Unix) - Andy Florczak - aflorcza@ford.com - 313-248-2658

Unix Server Operations
  First Contact - www.request.ford.com - "The Americas Unix Server Operations"
  Escalation - GSD - 800-322-3399 (alt. 313-248-1166)

Solaris Server Operations
  First Contact - www.request.ford.com - "The Americas Solaris Server Operations"
  Escalation - GSD - gsdsupp@ford.com - www.request.ford.com - 866-422-3399 (alt. 1-313-322-3399)

Windows Server Operations
  First Contact - www.request.ford.com - "The Americas Windows Server Operations"
  Escalation - Dan Olson - dolson12@ford.com - 313-845-8804 (alt. 313-805-1196)

Websphere Operations
  First Contact - www.request.ford.com - "The Americas Websphere Operations"
  Escalation - www.request.ford.com

TSM Backup Operations
  First Contact - "The Americas Backup Operations"
  1st Escalation - Instant Message tbackup@ford.com
  2nd Escalation - text page tbackup
  3rd Escalation - text page stormgt
  4th Escalation - Bob Plummer - rplummer@ford.com - 313-390-2556 (alt. 313-806-9634)

ESX Server Operations
  First Contact - www.request.ford.com - "The Americas ESX Server Operations"
  1st Escalation - ESX On-Call Pager (CDSID ESXPRI) - Pager 313-796-0665 (alt. Cell 313-574-6469)
  2nd Escalation - Bill Demshuk - wdemshuk@ford.com - 313-317-7871 (alt. 313-575-0542)

NAS Storage Operations
  First Contact - www.request.ford.com - "The Americas NAS Storage Operations"
  Escalation - Kerry Cardwell - kcardwel@ford.com - 313-322-0015 (alt. 313-671-3219)

Network Operations
  First Contact - gsdnet@ford.com - www.request.ford.com - 313-322-3399

Oracle & SQL Server
  Emergency - https://dept.sp.ford.com/sites/gdms/Pages/esc.aspx
  Non-Emergency - http://www.request.ford.com/RequestCenter/myservices/navigate.do?categoryid=73&query=catalog&

FMCC Data Center
  Frank D'Amore - fdamore@ford.com - 313-323-6875 (alt. 313-806-3912)

Building 6 Data Center
  Frank D'Amore - fdamore@ford.com - 313-323-6875 (alt. 313-806-3912)

MQ Series
  https://dept.sp.ford.com/sites/gdms/MWSPub/SitePages/Contacts.aspx

DR001 Disaster Recovery Plan Sign-off

Organization: ______________________________ (e.g. ITO GTAM)


Plan Name: ____________________________ (application or infrastructure name for this plan)
Last Updated: ______________________________ (should line up with last annual review)
I have reviewed the following Disaster Recovery plan documentation and, to the best of my knowledge, the information contained
herein is accurate and complete.

The following Disaster Recovery elements are addressed in the plan:

1. Overview of the application/infrastructure mission in support of the strategic importance to Ford Motor Company
(DR002)

2. Define routine problem management (support and escalation) processes, Business Criticality
Assessments/Operational Level Agreements (BCA/OLA), Recovery Time Objectives (RTO), and Recovery Point
Objectives (RPO) (DR002)

3. Identify major upstream/downstream application/infrastructure groups that may be affected (DR003)

4. Define vital/critical dependencies for this DR plan (DR003)

5. Define vital/critical resources for this DR plan (DR003a)

6. Define recovery teams, recovery tasks, damage assessment, and roles/responsibilities for applications and
infrastructure (DR004)

7. Develop application/infrastructure testing plan (DR004a)

8. Identify alternate hardware site details (DR005)

9. Identify Disaster Recovery Command Center details (DR006)

10. Perform annual employee DR training and roles/responsibility awareness (BCP012, BCP012a)

11. Complete annual DR Plan testing, resolution of issues, lessons learned, and update of DR plans (DR007, BCP013,
BCP014)

12. Perform annual DR plan maintenance review (DR007)

13. History and Distribution Log for DR Plan (DR008, DR009)

Signed: ______________________________ Date: ____________________

Name (print): __________________________________________


Disaster Recovery Plan Author

Signed: _______________________________ Date: ______________________

Name (print): ___________________________________________


Manager/Department head

The items in italicized blue text are for information only, and therefore they will need to be written for your specific requirements.

Date Prepared: Prepared by:
DR002 Application/Infrastructure Profile
EXAMPLE (Insert Application Name Here and OLA Level)
Description of Entry Identifying Information
1 Application/Infrastructure Name: MyHub
2 Overview of Application/Infrastructure: Provides automotive supplier network parts requisition support for North and South
American plants
3 Region and Business Unit: ITO GTAM NA HR Fin Acctg
4 Site Name: FMCC Data Center
5 Process Name: N/A
6 Business, Application, or Infrastructure Owner: MyBus Johnson (Business Owner); MyApp Jones (Application Owner); MyInf Smith
(Infrastructure Owner)
7 Routine incident management procedures/support window (Also, indicate the escalation process and, as appropriate, potential defect response time):
  https://www.tc.ford.com/ts/operations/OPS/IPM/Incident%20Synergy%20Documents/ITI%20Operations%20Communication%20Framework%20Incident%20Checklist%20v12.xls (Example only)
  1 Vital/Critical - Data from feeder systems is unavailable for report generation via client application - 4 hrs.
  2 Serious - Users cannot access the application via web - 8 hrs.
  3 Moderate - Application is available, but certain areas of the application are not functioning properly - 3 days
  4 Deferrable - Reports or screens are functional, but not displaying as they should be. Enhancements - 2 weeks
8 Production/Target (Failover) host resides on: FCXXXXX (Production)
ECCXXXXX (Target)
9 Application Business Criticality Assess (BCA): BCA=A
10 Operational Level Agreement (OLA) and basic host configuration information (Production/Target):

  Production Host - Host Information For Site FCXXXXX:
    Server_Name = FCXXXXX
    Description = MYSYS Prod (My Hub)
    Project = MYSYS PROD (MyHUB)
    Status = PRODUCTION
    OLA_Curr = 2
    OS = HP-UX
    Org = S3
    Server_Type = APP
    Make = HP
    Model = HP SERVER RP5470
    SerialNo = USM41049X3
    CPU_Config = 6x650 PA-RISC Family
    Memory_Config = 6144MB
    Location = FMCC DATA CENTER - SUITE 1443
    Box = 582-02
    Grid = R05
    OS_SG = TSO-C/S UNIX
    WEB_SG =
    Lease_End_Dt = 30-APR-05
    Purch Type = PURCHASE
    Server_Env = PROD
    Term Server:
      MC_CS_IP = FCASXX-CON.NLS.FORD.COM
      MC_TS_IP = FMC-LC17-TRMSLO8.NLS.FORD.COM
      MC_TS_PORT = 121
    Additional Notes: None

  Target Host - Host Information For Site ECCXXXXX:
    Server_Name = ECCXXXXX
    Description = MyHub DR
    Project = MyHUB
    Status = PRODUCTION
    OLA_Curr = 2
    OS = HP-UX
    Org = S3
    Server_Type = APP
    Make = HP
    Model = N4000
    SerialNo = USM401413Y
    CPU_Config = 6x650 PA-RISC Family
    Memory_Config = 6144MB
    Location = Building 6-4 DATA CENTER - ROOM 411
    Box = 4282-02
    Grid = L08
    OS_SG = TSO-C/S UNIX
    WEB_SG =
    Lease_End_Dt = 30-APR-04
    Purch Type =
    Server_Env = PROD
    Term Server:
      MC_CS_IP = ECCASXX-CON.NLS.FORD.COM
      MC_TS_IP = FMC-LC17-TRMSLO8.NLS.COM
      MC_TS_PORT = 173
    Additional Notes: None
11 Backup Methodology and Schedule: TSM Incremental forever, 1800-2000, Monday - Sunday,
  with Quiescent backup every 2nd Saturday, 0200-1000
  http://www.dcbackups.ford.com/
12 Recovery Time Objective (RTO): >2hrs <24hrs
13 Recovery Point Objective (RPO): <=24hrs (Last backup)
14 Supporting Supplier Name: General Electric (GXS E Series) Dirk Pitt
15 Application DR Plan Review Cycle: Annual
16 Application Architecture DR Configuration Metro-Mirror; DataGuard; TPC-R
DR002 Instructions and item description: Complete one of these high-level documents for each stand-alone application or infrastructure plan.
1 Application/Infrastructure Name: Enter the application or infrastructure name.
2 Overview of Application/Infrastructure: Enter a brief overview and description of the application/infrastructure.
3 Region and Business Unit: Enter the region and business unit name.
4 Site Name: Enter the site name where the production equipment resides.
5 Process Name: If your business categorizes processes or if you have a complex operation that benefits from separating an operation into several process steps,
enter the process name pertaining to this part of the DR plan.
6 Business, Application, or Infrastructure Owner: Enter the Business/Application/Infrastructure Owner (owner from plan perspective)
7 Routine problem management procedures/support window (including potential defect response time): Describe the process to handle routine problems, escalation
processes, and support (infrastructure and application). These must be validated by the owner. Information regarding application and infrastructure support may be
found on the GTAM site.
8 Production/Target (Failover) Host name(s): Identify production and target host names in the applications environment.
9 Application Business Criticality Assessment (BCA): Enter the BCA rating. Refer to the GTAM Operational Excellence site for more information.
10 Operational Level Agreement (OLA) and basic host configuration information: For each production and failover host, copy/paste information from
http://www.admintool.ford.com. Identify the basic OS-level "core" application type/series/level (e.g. HP FUSE 11.1, Oracle 9i, MQ-Series, etc.) required for your
application.
11 Backup Methodology and Schedule: Define the specific method, schedule, off-site location of backup (Tape/VTS; Full/Incr; Daily etc.) for the
application/infrastructure. Refer to the TSM website.
12 Application/Service Recovery Time Objective (RTO): Enter the maximum allowable downtime (in hours) that the customer can be without this application/service.
This establishes recovery prioritization, and is set by the business owner during the business impact analysis process.
13 Recovery Point Objective (RPO): Data lost, in terms of hours, based on the backup frequency of the files and the retention period of the backup tapes. Use the worst-
case scenario (tapes destroyed); see the worked sketch following this list. This should be established during the business impact analysis.
14 Supporting Supplier Name: Enter the full contact name of the supplier here. Review your corresponding Supplier/Customer Contact List (BCP010) in your Business
Continuity Plan as appropriate. See the Ford Business Continuity site.
15 Application DR Plan Review Cycle: Enter the time period that this plan will be reviewed and updated.
16. Select any high availability solutions your application uses.
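To make the worst-case RPO arithmetic in item 13 concrete, here is a minimal sketch. The backup interval and the assumption that one backup generation is unusable are illustrative figures only, not this application's actual values.

    # Illustrative worst-case RPO arithmetic; the backup interval and the number
    # of destroyed backup generations are example figures only.
    backup_interval_hours = 24       # one incremental backup per day
    destroyed_generations = 1        # assume the most recent tape is unusable

    # Data created since the last *usable* backup is lost.
    worst_case_data_loss_hours = backup_interval_hours * (destroyed_generations + 1)

    print(f"Worst-case data loss: up to {worst_case_data_loss_hours} hours")
    # With a daily backup and the newest tape destroyed, the stated RPO should
    # reflect the older usable backup, not just the nominal backup interval.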

Note: These are NOT all inclusive. Include anything that you need to recover your application/infrastructure. Consider alternatives, how to access the application if
primary means are not available (manual processes or alternatives). Evaluate whether the application or infrastructure is using non-standard tools.

The items in italicized blue text are examples only, and therefore they will need to be written for your specific requirements.

Date Prepared: Prepared by:
DR003 Vital/Critical Dependencies
EXAMPLE (Insert Application Name Here and OLA Level)

Columns:
1. Dependencies
2. U/D (upstream/downstream)
3. Primary/Alternate Point of Contact (POC) Information, Including Queue or Method to engage them
4. Team's Response SLA/BCA level (application) or OLA level (infrastructure) (see SLA/OLA matrix)
5. Date DR Plan Last Updated
6. Date DR Plan Last Tested
7. Comments

Example rows:

1. Dependency: WSL
2. U/D: U (upstream)
3. POC: Primary - John Smith (Jsmith94), Building 6, Bldg 5091, Rm 2AE02, 39-0XXXX (work), 248-987-6543 (cell), jsmith@yahoo.com (home); Alternate - Barbara Jones (Bjones25), Building 6, Bldg 5091, Rm 2AE02, 39-0XXXX (wk), 586-123-4567 (cell), bjones@yahoo.com (home); GIRS queue NA-RACF
4. SLA/OLA: SLA 2
5. Date DR Plan Last Updated: 2013.01.01
6. Date DR Plan Last Tested: 2013.01.01
7. Comments: WSL provides access to intranet services and is needed for authentication purposes.

1. Dependency: PAS
2. U/D: D (downstream)
3. POC: Primary - Paul Doitmyway (Pdoitmyway), Building 6, Bldg 5091, Rm 2AE02, 39-0XXXX (work), 248-987-6543 (cell), pway@yahoo.com (home); Alternate - Vinny DaChin (Vdchin25), Building 6, Bldg 5091, Rm 2AE02, 39-0XXXX (work), 586-123-4567 (cell), vdachin@yahoo.com (home); GIRS queue NA-RACF
4. SLA/OLA: N/A for downstream dependencies
5. Date DR Plan Last Updated: N/A for downstream dependencies
6. Date DR Plan Last Tested: N/A for downstream dependencies
7. Comments: This application forwards Automotive Supplier Notification requests from the Kentucky Truck Plant for parts requisition into Facility B for outbound processing to the Visteon supply network.
DR003 Instructions and item description: Complete one of these high-level documents for each stand-alone application or infrastructure plan.
1. Dependencies - List all vital/critical upstream and downstream applications or infrastructure that affect or are impacted by this DR plan. These may also be associated with resources listed in
column five below.
2. Indicate whether the dependency listed in column 1 is an upstream or downstream application/infrastructure.
a. Upstream applications/infrastructure are required for production (Example: WSL, CMMS, etc.).
b. Downstream applications/infrastructure are dependent on your application for production (DB2, Oracle, PeopleSoft, HR Online, GCS, etc.).
3. Primary/Alternate Point of Contact (POC) - Identify the primary and alternate POC name, CDSID, address, phone, email, queue, and method used to contact the person or team responsible for
supporting the dependency listed in column one.
4. Indicate what the dependency's SLA is to you. If the SLA is greater than what is expected of your application, indicate this in RED. N/A for downstream dependencies. For more information
regarding SLA/OLA, please see the SLA/OLA matrix.
5. Indicate when the dependency's DR Plan was last updated. If it has never been updated, or was last updated over a year ago, indicate this in RED. N/A for downstream dependencies.
6. Indicate when the dependency's DR Plan was last tested. If it has never been tested, or was last tested over a year ago, indicate this in RED. N/A for downstream dependencies.
7. Comments - Use this space for any additional details which may help expand on details not provided elsewhere on this form.
Note: These are NOT all inclusive. Include anything that you need to recover your application/infrastructure. Consider alternatives, such as how to access the application if primary means are not available
(manual processes or alternatives). Evaluate whether the application or infrastructure is using non-standard tools. The items in italicized blue text are examples only, and therefore they will need to be
written for your specific requirements.

Date Prepared: Prepared by:
DR003a Vital/Critical Resources
EXAMPLE (Insert Application Name Here and OLA Level)

Columns:
1. Resources
2. Purpose
3. How to Obtain/Location
4. Primary/Alternate Point of Contact (POC) Information, Including Queue or Method to engage them
5. Comments

Example rows:

1. Resource: Run book for server FCXXXXX
2. Purpose: Contains instructions to rebuild the server
3. How to Obtain/Location: Attached to plan as Appendix A
4. POC: Primary - John Smith (Jsmith94), Building 6, Bldg 5091, Rm 2AE02, 39-0XXXX (work), 248-987-6543 (cell), jsmith@yahoo.com (home); Alternate - Barbara Jones (Bjones25), Building 6, Bldg 5091, Rm 2AE02, 39-0XXXX (wk), 586-123-4567 (cell), bjones@yahoo.com (home); GIRS queue NA-RACF
5. Comments: The run book is attached as an appendix. If the link is down, John and Barbara were the original authors and would have backup copies.

1. Resource: myalias.dearborn.ford.com
2. Purpose: Alias-bound transaction redirection
3. How to Obtain/Location: DNS Ticket to QIP (DNS Alias)
DR003a Instructions and item description: Complete one of these high-level documents for each stand-alone application or infrastructure plan.
1 Resources - Identify any vital/critical resources that are or may be required to recover your application/infrastructure. Resources may be people, services, hardware, software, or anything that is
needed for recovery teams to perform their job when restoring applications or infrastructure. Examples include:
a. Business Continuity (BC) and Disaster Recovery (DR) plans
b. Run books, standard operating procedures
c. Business and architectural diagrams showing how data flows through the application and the infrastructure involved
d. Software or infrastructure management tools:
- Application/Database resources: WAS, VBS, ASP, SQL, Oracle, Crystal Reports, Hyperion, Business Intelligence
- Infrastructure Management Resources: Tivoli, EMC Control Center, Cisco Configuration Center, Remote Access Server FTP, Remote Console Service, Hummingbird, PuTTY etc
e. Tracking Systems (Request Center, eTracker etc)
f. Collaboration tools (File Shares, Sharepoint sites)
g. PC/Laptop/other device configuration requirements (GCS level, FUSE level, configuration etc)
2 Purpose - Describe this resource's purpose in this application/infrastructure DR plan.
3 How to Obtain/Location - Describe how to obtain this resource during an emergency.
4 Primary/Alternate Point of Contact (POC) - Identify the Primary and Alternate POC name, CDSID, address, phone, and email of the person responsible for providing support for the resource.
5 Comments - Use this space for any additional details.
Note: These are NOT all inclusive. Include anything that you need to recover your application/infrastructure. Consider alternatives, how to access the application if primary means are not available
(manual processes or alternatives). Evaluate whether the application or infrastructure is using non-standard tools. The items in italicized blue text are examples only, and therefore they will need to be
written for your specific requirements.

Instructions: Complete section DR004 to plan what would be done in the event of a real disaster. Try to account for the worst case scenario (all systems in one entire data center unavailable), and detail the steps
that would be needed to recover your application. DR004 can then be used as a template for DR004a, section 2, the steps needed to perform a Disaster Recovery test. Please remember that all the steps
necessary to recover your application may or may not be needed to perform a disaster recovery test - the Disaster Recovery test (DR004a) may be a subset of section DR004.

DR004 Disaster Recovery Team Planning Procedure


EXAMPLE (Insert Application Name Here and OLA Level)
# | Task | Plan Start Date-Time (Actual Start Date-Time) | Planned Duration (Actual Duration) | Responsibility | Comments
Recovery Team Planning Tasks
1 As appropriate travel to Time of Disaster (TOD) + 3 hours Supervisor gathers team per DR006/BC plan Gather BC plans, DR plans, and other
recovery site or alternate work one hour technical documents as appropriate
site. Review situation (architectural, business flow diagrams,
To be filled in during real To be filled in during real run books, etc)
disaster disaster
2 Establish communication with TOD + 4 hour 3 hours Required Discuss situation, establish
incident management and other List CDSID requirements and responsibilities
teams as needed. Conduct To be filled in during real To be filled in during real (conduct damage assessment,
initial damage assessment disaster disaster Alternate mitigation, and triage efforts; situation
List CDSID report status, establish initial tasks,
timing, controls, etc)
3 Identify recovery requirements, <Enter recovery teams> These may include application and
Define recovery team, review Required infrastructure SMEs such as
contact lists List CDSID application developers, database
Alternate administrators, server, storage, backup
List CDSID restore teams, suppliers, vendors
etc Review your contact lists for
details.
4 Identify recovery resources; Required Personnel, provisioning of resources,
refine recovery team and tasks. List CDSID suppliers/vendors etc
Evaluate mitigation options. Alternate Possible update in tasks as needed
List CDSID (personnel outage, etc). Begin
damage assessment and mitigation
steps.
5 Establish DR decisions, and set Required Validate BCA, OLA, RTO, RPO, and
recovery prioritization List CDSID provide situational report to upper
Alternate management as needed.
List CDSID
6 Centralize communications, Required Establish bridge line, set ground rules
status reporting, tasks List CDSID for communication of issues, reporting
Alternate of status, requesting support (GIRS,
List CDSID etc)
7 If situation, time, and resources Required Not required, recommended. Enter
permit, perform backup of List CDSID and process Request Center ticket
failover target to preserve Alternate XXXXXXX
QA/DEV state for RTNO List CDSID
Core System Level Recovery Tasks
8 Recovery resource engagement 2013.01.01 12:34 AM 2013.01.01 12:45 AM <Enter recovery teams> If appropriate, enter and process
Request Center ticket XXXXXXX to
2013.01.01 12:45 AM 2013.01.01 12:55 AM begin system level recovery
9 Notify APPNAME team and <Enter recovery teams> Send customer advisory to refrain from
customer base of recovery submitting production requests
status/actions beginning (Insert time until notified
that production is available)
10 Begin initial failover/recovery <Enter recovery teams> Perform system failover or tape
process (establish protocols to restore of production source to failover
mount mirror or recover data target. Ensure application, database,
from backup for production in and other essential services are
DR mode stabilized/quiet to minimize data
corruption.
11 Configure network, storage, 2013.01.01 12:40 AM 2013.01.01 12:45 AM <Enter recovery teams> Refer to recovery documentation,
memory, and file systems as configuration run books, standard
needed to accept application 2013.01.01 12:50 AM 2013.01.01 12:55 AM operating procedures, etc
data restoration from backup
12 Perform recovery of core <Enter recovery teams> Refer to recovery documentation,
presentation layer software (OS, configuration run books, standard
File Manager, Web etc.) except operating procedures, etc
database
13 Reestablish permissions, user <Enter recovery teams>
logon IDs, and passwords
14 Create a temporary mounting <Enter recovery teams> Example: mount point may be
point on (failover target) that \\FMCXXXXXXX\PROJ\XXXXX
emulates the APPNAME
directory on (production source)
15 Validate servers/systems are <Enter recovery teams> Refer to recovery documentation,
restored configuration run books, standard
operating procedures, etc
16 Hand over to database <Enter recovery teams> Enter and process Request Center
administrator for next step ticket XXXXXXX
Application/Database Level Recovery Tasks
17 Block ports, shutdown network, 2013.01.01 12:34 AM 2013.01.01 12:45 AM <Enter recovery teams> Prevents extraneous access to system
indicate Message Of The Day during restoration. Run recovery
(MOTD) that (failover target) 2013.01.01 12:45 AM 2013.01.01 12:55 AM processes from console.
is now running as (production
source) in DR mode
18 Load (production source) <Enter recovery teams> Refer to recovery documentation,
application and database configuration run books, standard
backup on (failover target). operating procedures, etc Perform
initial low level validation
19 Perform additional 2013.01.01 12:34 AM 2010.12.12 12:44 AM <Enter recovery teams> Refer to recovery documentation,
application/database level <Enter DBA, application developers, etc.> configuration run books, standard
recovery tasks 2013.01.01 12:45 AM 2010.12.12 12:55 AM operating procedures, etc Perform
initial low level validation.
20 Modify application alias or <Enter recovery teams> Enter and process Request Center
scripts for DNS/TNS to mimic <Enter DBA, application developers, etc.> ticket XXXXXXX to NOC
(production source)
21 Start up application/database <Enter DBA, application developers, etc.> Once DBA has validated the state of
and begin initial validation. <Enter recovery teams (Standby for resolution)> core database, application teams can
Resolve issues. begin higher level recovery validation.

Refer to recovery documentation,


configuration run books, standard
operating procedures, etc Perform
initial low level validation.
22 Validate availability of <Enter DBA, application developers, etc.> Communicate status update to
upstream/downstream <Enter recovery teams (Standby for resolution)> management or other authority as
dependencies. appropriate.

Standby to test production in Obtain status of upstream/downstream


DR mode dependencies standby for next
steps.
23 Startup network interfaces, <Enter DBA, application developers, etc.> Once directed by management or
unblock software ports. <Enter recovery teams (Standby for resolution)> other authority to proceed, begin
testing routine application/database
Test a single input/output "test" transactions.
process. If necessary, return to recovery steps
to resolve issues. Refer to recovery
Resolve issues as appropriate documentation, configuration run
books, standard operating procedures,
etc

Perform final validation. Enter and


process Request Center ticket
XXXXXXX.
Post Recovery Release to Production in DR Mode Tasks
24 Finalize application/database for 2013.01.01 12:34 AM 2013.01.01 12:44 AM <Enter DBA, application developers, etc.> Communicate status update to Critical
production release. <Enter recovery teams (Standby for resolution)> Incident and Problem Management
2013.01.01 12:45 AM 2013.01.01 12:55 AM (CPAT)/Management Control Team
(MCT) or other authority as
appropriate.

Standby to release to application


team/business owners
25 Release to application team for <Enter DBA, application developers, etc.> Communicate status update to Critical
production <Enter recovery teams (Standby for resolution)> Incident and Problem Management
(CPAT)/Management Control Team
(MCT) or other authority as
appropriate.

Enter and process Request Center


ticket XXXXXXX.
Post Recovery Return to Normal Operations Application
26 Recovery resource engagement 2013.01.01 12:34 AM 2013.01.01 12:45 AM <Enter recovery teams> If appropriate, enter and process
Request Center ticket XXXXXXX to
2013.01.01 12:45 AM 2013.01.01 12:55 AM begin system level recovery
27 Notify APPNAME team and <Enter recovery teams> Send customer advisory to refrain
customer base of recovery from submitting production requests
status/actions beginning (Insert time until notified
that production is available)
28 Begin initial failback/recovery <Enter recovery teams> Perform system failback or backup
process (establish protocols to restore of production source to failback
mount mirror or recover data target. Ensure application, database,
from backup for production in and other essential services are
DR mode stabilized/quiet to minimize data
corruption.
29 Configure network, storage, 2013.01.01 12:40 AM 2013.01.01 12:45 AM <Enter recovery teams> Refer to recovery documentation,
memory, and file systems as configuration run books, standard
needed to accept application 2013.01.01 12:50 AM 2013.01.01 12:55 AM operating procedures, etc
data restoration from backup
30 Perform recovery of core <Enter recovery teams> Refer to recovery documentation,
presentation layer software (OS, configuration run books, standard
File Manager, Web etc..) except operating procedures, etc
database
31 Reestablish permissions, user <Enter recovery teams>
logon IDs, and passwords
32 Create a temporary mounting <Enter recovery teams> Example: mount point may be
point on (failover target) that \\FMCXXXXXXX\PROJ\XXXXX
emulates the APPNAME
directory on
(production source)
33 Validate servers/systems are <Enter recovery teams> Refer to recovery documentation,
restored configuration run books, standard
operating procedures, etc
34 Hand over to database <Enter recovery teams> Enter and process Request Center
administrators for next step ticket XXXXXXX
Post Recovery Return To Normal Operations Database
35 Block ports, shutdown network, 2013.01.01 12:34 AM 2013.01.01 12:45 AM <Enter recovery teams> Prevents extraneous access to system
provide Message Of The Day during restoration. Run recovery
(MOTD) indicating that (failover 2013.01.01 12:45 AM 2013.01.01 12:55 AM processes from console.
target) is now running as
(production source) in DR mode
36 Load (production source) <Enter recovery teams> Refer to recovery documentation,
application and database configuration run books, standard
backup on (failover target). operating procedures, etc Perform
initial low level validation
37 Perform additional 2013.01.01 12:34 AM 2013.01.01 12:44 AM <Enter recovery teams> Refer to recovery documentation,
application/database level <Enter DBA, application developers, etc.> configuration run books, standard
recovery tasks to present for 2013.01.01 12:45 AM 2013.01.01 12:55 AM operating procedures, etc Perform
production. initial low level validation.
38 Modify application alias or <Enter recovery teams> Enter and process Request Center
scripts for DNS/TNS to mimic <Enter DBA, application developers, etc.> ticket XXXXXXX to NOC
(production source)
39 Start up application/database <Enter DBA, application developers, etc.> Once DBA has validated the state of
and begin initial validation. <Enter recovery team (Standby for resolution)> core database, application teams can
Resolve issues. begin higher level recovery validation.

Refer to recovery documentation,


configuration run books, standard
operating procedures, etc Perform
initial low level validation.
40 Validate availability of <Enter DBA, application developers, etc.> Communicate status update to
upstream/downstream <Enter recovery team (Standby for resolution)> management or other authority as
dependencies. appropriate.
Obtain status of upstream/downstream
Standby to test production in DR dependencies standby for next
mode steps.
41 Startup network interfaces, <Enter DBA, application developers, etc.> Once directed by management or
unblock software ports. <Enter recovery teams (Standby for resolution)> other authority to proceed, begin
testing routine application/database
Test a single input/output "test" transactions.
processes. If necessary, return to recovery steps
to resolve issues. Refer to recovery
Resolve issues as appropriate documentation, configuration run
books, standard operating procedures,
etc

Perform final validation. Enter and


process Request Center ticket
XXXXXXX.
Issues and Follow Up Tasks

DR004 Instructions and item description: Complete one of these high-level documents for each stand-alone application or infrastructure plan.
- Enter all tasks, dates, and responsibilities required to recover the application/infrastructure, including the teams or individuals needed to perform them. Include change control tickets if
needed.
- Review your BC plan contact information (BCP009 and BCP010) to ensure there are no changes since the last update. If differences exist, determine the correct information and update it for use
during the event. There may be several different contact lists. Contact lists contain information pertaining to recovery teams, subject matter experts, suppliers, customers, or other key
contacts as appropriate.
- Contact information must be sufficiently comprehensive (alternative contact methods), easily accessible, and regularly updated for immediate use during emergencies.
- See the Business Continuity Process Guide for more information on BC and templates for contact and supplier information.
- During a disaster, do not initiate a task until the application/infrastructure team lead indicates that it is OK to do so.

The items in italicized blue text are examples only, and therefore they will need to be written for your specific requirements.
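Several DR004 steps repoint an application alias at the failover host and then validate the service. The sketch below is a minimal post-failover check under assumed values: the alias, host name, and listener port are placeholders, not actual Ford endpoints, and the check only confirms name resolution and that the port answers.

    # Hypothetical post-failover check: confirm the application alias resolves to
    # the failover host and that the service port responds. Alias, host name, and
    # port are placeholders -- substitute your own values.
    import socket

    ALIAS = "myalias.dearborn.example.com"   # application DNS alias (placeholder)
    EXPECTED_HOST = "eccxxxxx.example.com"   # failover target (placeholder)
    SERVICE_PORT = 1521                      # e.g. a database listener port

    alias_ips = {info[4][0] for info in socket.getaddrinfo(ALIAS, None)}
    expected_ips = {info[4][0] for info in socket.getaddrinfo(EXPECTED_HOST, None)}

    print("Alias points at failover host:", bool(alias_ips & expected_ips))

    try:
        with socket.create_connection((ALIAS, SERVICE_PORT), timeout=5):
            print(f"Service port {SERVICE_PORT} on {ALIAS} is answering")
    except OSError as exc:
        print(f"Service port {SERVICE_PORT} on {ALIAS} is NOT answering: {exc}")

A check like this supplements, but does not replace, the application-level validation performed by the DBA and application teams in the later DR004 steps.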

DR004a - Client Server Architecture DR Test Planning Template
Instructions: Complete section DR004a to plan what will be done during a DR test. Test steps should be sequential. Some steps documented in DR004 above may be applicable to the DR test. Please remember
that not all the steps necessary to recover your application in a real disaster may be needed to perform a disaster recovery test. The Disaster Recovery test (DR004a) may be a subset of section DR004. Please
remove any steps that are not required.

DR004a Part 1 Disaster Recovery Team Testing Procedure


EXAMPLE (Insert Application Name Here Including OLA Level)

Application/Infrastructure Lead for Test: Contact Info:

Testing *Insert test scope here, i.e. This test will recreate the steps needed to test the situation where the primary server FCXXXXX is no longer available and production needs to
Scope: switch to the backup server ECCXXXXX

Servers/Databases Required For Test Location Operations Owner Shared OS Time Zone

Server - ECCXXXXX Building 6 - Unix Deployment team y Solaris 10 EST

DB SV2PHHR FMCC - GDMS team y Oracle 10g EST

Teams Required For Test Team Member Contact Info

CDSID@ford.com
Disaster Recovery Coordinator John Disaster
Phone: 313-XXX-XXXX

CDSID@ford.com
Application team Jane Application
Phone: 313-XXX-XXXX

CDSID@ford.com
Business Owner John Owner
Phone: 313-XXX-XXXX

CDSID@ford.com
Business Tester Jane Tester
Phone: 313-XXX-XXXX

CDSID@ford.com
DBA John Database
Phone: 313-XXX-XXXX

CDSID@ford.com
System Administrator Jane Administrator
Phone: 313-XXX-XXXX

CDSID@ford.com
Network John Network
Phone: 313-XXX-XXXX

Open Ticket Date Opened Purpose Owner

#NXXXXXXX 2013.01.01 Database failover Application team

Instructions:
Do not initiate a task until the Disaster Recovery Coordinator indicates that it is OK to do so during test
Insert tasks within the plan indicating when bridge lines are opened and closed, and the required information for those bridge lines
DR004 can be used as a starting point for DR004a Part 2. Steps should be modified as required to fit scope of test
Steps in black are generally required for all client-server applications. Steps in Blue are generally optional

DR004a Part 2- Disaster Recovery Team Testing Procedure


EXAMPLE (Insert Application Name Here and OLA Level)
Plan Start Date Time Planned Duration
# Task Responsibility Comments
Actual Start Date - Time Actual Duration
Test Planning Tasks Items in black below are Required Steps
1 Notify Business Owner of the planned Pre-DR test Application team
DR test
2 If feasible, submit ticket to schedule Pre-DR test Application team Ticket should be submitted 1 week
reboot of servers involved in DR test. before reboot. Reboot should take place
Reboot should be 1-2 weeks before 1-2 weeks before test to make sure
DR test testing takes place on a clean reboot
3 Submit Request Center ticket for Pre-DR test Application team Use Web Hosting Site Administration and
WAS team support during test Information Requests. Reference this
Wiki for more information.
4 Submit Request Center ticket for DBA Pre-DR test Application team
support during test
5 Submit Request Center ticket for Pre-DR test Application team
System administrator support during
test
6 Submit a Request Center ticket Pre-DR test Disaster Recovery Coordinator Metro Mirror tests only
detailing the steps we want the SAN
team to take in order to switch Metro
Mirror sides
7 Submit a Request Center ticket Pre-DR test Disaster Recovery Coordinator Metro Mirror tests only
detailing the steps we want the SAN
team to take in order to switch Metro
Mirror back (RTNO)
8 Application team to acknowledge that Pre-DR test Application team
data/transactions made while in DR
mode will be saved when switching
back to production.
9 Approve the test date/time Pre-DR test Application team/Business Owner
10 Complete scheduled reboot requested Pre-DR test System Administrator At least 2 weeks in advance of test
by application team
11 Submit GICC ticket for test Pre-DR test Disaster Recovery Coordinator At least 2 weeks in advance of test
12 Notify the Change Control team about Pre-DR test Disaster Recovery Coordinator At least 2 weeks in advance of test
the forthcoming DR test
13 Schedule Pre-test Go/No-go meeting Pre-DR test Disaster Recovery Coordinator Should be held 1-2 days before test
14 Schedule test plan review meeting(s) Pre-DR test Disaster Recovery Coordinator Should be scheduled at least 2 weeks
before test
15 Schedule WebEx meeting for test Pre-DR test Disaster Recovery Coordinator The test will be managed on this call

16 Conduct test plan review meeting Pre-DR test Disaster Recovery Coordinator At least 2 weeks before DR test
and invite all teams required for test

17 Notify end users of DR test if outage Pre-DR test Application team
is required
18 Hold go/no-go meeting Pre-DR test Disaster Recovery Coordinator Usually the day before the test
Test Tasks
19 Start a WebEx meeting This will be the start of Disaster Recovery Coordinator
the DR test
20 Verify complete backups are available System administrator CBMR backups for OLA2 boot from SAN
(including CBMR backups where SQL Metro Mirror DR tests
applicable) in case systems need to
be recovered
21 Go/No-go decision All teams
22 Start GICC records Disaster Recovery Coordinator
23 Send a message to the applicable Oracle DBA, Unix, or WAS administrator Oracle:
infrastructure's bulkmail list (where gdms_orclsdba_team@bulkmail.ford.com
applicable) stating that the DR test Unix:
has begun and indicate the ESHO_UNIX_ALL@bulkmail.ford.com
Oracle/Unix/WAS servers involved in WAS: ESHO_WAS@bulkmail.ford.com
the test (a notification sketch follows
the DR004a instructions below)
24 Stop monitoring of any systems which System administrator To make sure there are no false alarms
will be brought down during test triggered during test
25 Failover application database Database administrator Be specific with server names and
XXXXXXXX from production database names
database server XXXXXXX to failover
database server XXXXXXX
26 Failover/stop IHS and JVM instances WAS administrator Be specific with server names and
on WAS servers XXXXXXX and instance names
XXXXXXX so they're only running on
WAS servers XXXXXXX and
XXXXXXXX
27 Failover production Metro Mirror- SAN administrator Be specific with server names
configured server XXXXXXX to
failover server XXXXXXX
28 Application team performs testing in Application team
disaster recovery mode
29 Failback failover Metro Mirror- SAN administrator Be specific with server names
configured server XXXXXXX to
production server XXXXXXX
30 Failback/start IHS and JVM instances WAS administrator Be specific with server names and
on WAS servers XXXXXXX and instance names
XXXXXXX
31 Failback application database Database administrator Be specific with server names and
XXXXXXXX from failover database database names
server XXXXXXX to production
database server XXXXXXX
32 Application team performs testing in Application team

production mode
33 Send a message to the applicable Oracle DBA, Unix, or WAS administrator Oracle:
infrastructure's bulkmail list (where gdms_orclsdba_team@bulkmail.ford.com
applicable) stating that the DR test Unix:
has ended and indicate the ESHO_UNIX_ALL@bulkmail.ford.com
Oracle/Unix/WAS servers involved in WAS: ESHO_WAS@bulkmail.ford.com
the test
34 Restart monitoring for any systems that System administrator
had monitoring disabled and are now
back online
35 Implement and close GICC records Disaster Recovery Coordinator
Issues and Follow Up Tasks
36 Complete BCP013 Application team
37 Complete BCP014 Application team
DR004a Instructions and item description: Complete one of these high-level documents for each stand-alone application or infrastructure plan.
Enter all tasks and dates, and assign responsibility for each item required to recover the application/infrastructure, including the teams or individuals needed to perform them. Include change control tickets if needed.
The items in italicized blue text are examples only, and therefore they will need to be written for your specific requirements.
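
Steps 23 and 33 above send start and end notices to the applicable infrastructure bulkmail lists. A minimal sketch of how that notification might be scripted is shown below; the list addresses are taken from the comments column of the test plan, while the SMTP relay host and sender address are assumptions that must be adjusted to your mail environment.

# Hypothetical DR test notification sketch for DR004a steps 23 and 33.
# The SMTP relay host and sender address are assumptions; the bulkmail list
# addresses come from the comments column of the test plan above.
import smtplib
from email.message import EmailMessage

BULKMAIL_LISTS = {
    "oracle": "gdms_orclsdba_team@bulkmail.ford.com",
    "unix": "ESHO_UNIX_ALL@bulkmail.ford.com",
    "was": "ESHO_WAS@bulkmail.ford.com",
}

def send_dr_test_notice(infrastructure: str, phase: str, servers: list[str],
                        smtp_host: str = "smtp.example.ford.com",   # assumption
                        sender: str = "disaster@ford.com") -> None:
    """Send a DR test start/end notice to the applicable bulkmail list."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = BULKMAIL_LISTS[infrastructure]
    msg["Subject"] = f"DR test has {phase}: {', '.join(servers)}"
    msg.set_content(
        f"The disaster recovery test has {phase}.\n"
        f"Servers involved in the test: {', '.join(servers)}\n"
    )
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)

# Example usage for step 23 (start of test):
# send_dr_test_notice("oracle", "begun", ["XXXXXXX", "XXXXXXX"])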

Date Prepared: Prepared by:
DR005 Alternate Hardware Site Details
EXAMPLE (Insert Application Name Here and OLA Level)
Alternate Site

1. Building Location: Building 6 Data Center, 20600 Rotunda Dr., Dearborn, MI 48121
2. Primary/Alternate Facilities Points of Contact:
   Alvan Fontenot, 1-313-322-6517, Pager PIN 07954305, afonteno@ford.com
   Paul Parton, 1-313-390-1700, Pager PIN 02014375, Pparton1@ford.com
3. Room Type: Data Center
4. Comments: This is where the failover/target computer equipment is housed

1. Building Location: FMCC Data Center, One American Road, Dearborn, MI 48126
2. Primary/Alternate Facilities Points of Contact:
   Frank D'Amore, 1-313-323-6875, fdamore@ford.com
   Rich Capra, 1-313-323-1960, rcapra@ford.com
3. Room Type: Data Center
4. Comments: This is where the failover/target computer equipment is housed

DR005 Instructions and item description: Complete one of these high-level documents for each stand-alone application or infrastructure plan.
1. Identify the street address and room location of the alternate hardware site. Both Building 6 and FMCC locations are provided as examples. Remove the building that does not apply to your
application/infrastructure.
2. Identify the primary and alternate points of contact (POC) for the facility in case the recovery team needs to contact them. Provide primary and alternate methods of contacting them (phone, email, pager; working hours and after hours as appropriate). These are the people you will need to contact in the event you need physical access to the alternate hardware.
3. Room Type - General room description (small, medium, large, etc.)
4. Comments - Use this space for any additional details which may help expand on details not provided elsewhere on this form.
Alternate site considerations
Is the site equipped to support vital and critical operations (redundant or separate power, cooling, telecommunications, information technology grids)?
Is equipment readily available to deliver, stage, and present to operations? Is it aligned with your OLA?
Alternate site information: See glossary entry for more information
*Note: Some conference rooms may have inactive phone and network outlets. Check with the facilities owner to find out how to activate in an emergency.
The items in italicized blue text are examples only, and therefore they will need to be written for your specific requirements.

Date Prepared: Prepared by:
DR006 Disaster Recovery Command Center (DRCC)
EXAMPLE (Insert Application Name Here and OLA Level)

Primary (incident happens during normal working hours and doesn't affect normal working space):
1. Building Location (Floor, Suite, and Room Number): Building, Room number
2. Primary/Alternate Points of Contact: Name (CDSID), Work phone, Cell phone / Name (CDSID), Work phone, Cell phone
3. Room Type: Sm Conf Rm
4. Seating Capacity: 3
5. Network Outlets: *2
6. Phone Outlets: 1
7. Electrical Outlets: 6
8. Comments: 1 small table; *wireless-access-capable network

Second (incident happens off hours):
1. Building Location: WebEx info
3.-7. Room Type / Seating / Network / Phone / Electrical: NA
8. Comments: Bridge call; all users work from home

Third (incident happens during work hours and affects primary location):
1. Building Location: Refer to your BC plan
2. Primary/Alternate Points of Contact: Name (CDSID), Work phone, Cell phone, Alternate e-mail / Name (CDSID), Work phone, Cell phone, Alternate e-mail
3. Room Type: Sm Conf Rm
4. Seating Capacity: 6
5. Network Outlets: *2
6. Phone Outlets: 1
7. Electrical Outlets: 4
8. Comments: 1 small table; *wireless-access-capable network
DR006 Instructions and item description: Complete one of these high-level documents for each stand-alone application or infrastructure plan.
1. Identify the building name and room location of the primary (work hours), secondary (non-work hours) and third (Reference your BC location) sites where the application/infrastructure team will
meet to restore services
2. Identify the primary and alternate points of contact for the recovery. Provide multiple ways of contacting them.
3. Room Type - General room description (small, medium, large, etc.)
4. Seating Capacity - List the maximum allowable capacity for this room.
5. *Network Outlets - List the number of installed (primary active/standby inactive) network ports available. Indicate in the comments column if wireless is available.
6. *Phone - List the number of installed phone outlets that are available. Indicate the room phone number if available.
7. Electrical - List the number of electrical outlets available.
8. Team Requirements/Comments - Use this space for any additional details which may help expand on details not provided elsewhere on this form.

Disaster Recovery Command Center (DRCC) Considerations


Evaluate how much space is required for your team
Who will be using the DRCC? Will all teams report here? Who needs to use this facility? Can existing conference rooms, think tanks, or training facilities be used?
Are sites equipped to adequately communicate with teams? Are there sufficient wireless access points (saturation points evaluated), network drops, video conferencing equipment, phones,
printers, copiers, etc available to crisis teams?
Are personal facilities available (potable water, sleeping quarters, kitchenettes, vending facilities, etc) to support crisis teams who may need to work extended hours?
Are there private rooms available to discuss sensitive matters that must not be released while working through the crisis?
Does this site have access to recovery plans, diagrams, contact information, general awareness, or other vital or critical resources needed to manage/mitigate the crisis at hand?
Establish clear communication management procedures and responsibilities. How are communications performed (received, documented, controlled)?
Establish clear command and control management procedures and responsibilities (tasks, responsibilities, authority). Who has overall authority (recovery efforts, information release etc)?
*Note: Some conference rooms may have inactive phone and network outlets. Check with the facilities owner to find out how to activate in an emergency.
The items in italicized blue text are examples only, and therefore they will need to be written for your specific requirements.

Date Prepared: Prepared by:
DR007 Disaster Recovery Plan Maintenance Schedule

1. Core components   2. Corresponding Form   3. Responsibility   4. Suggested Frequency   5. Date Completed

Application/Infrastructure Profile              DR002     *Annual
Vital/Critical Dependencies                     DR003     Quarterly
Vital/Critical Resources                        DR003a    Quarterly
Disaster Recovery Team Planning Procedure       DR004     *Annual
Disaster Recovery Team Testing Procedure        DR004a    *Annual
Alternate Hardware Site Details                 DR005     *Annual
Disaster Recovery Command Center                DR006     *Annual
Disaster Recovery Plan Maintenance Schedule     DR007     Quarterly
Disaster Recovery Plan History Log              DR008     *Annual
Disaster Recovery Distribution Log              DR009     *Annual

The following items are included in your Business Continuity (BC) plan, and may be required while executing your Disaster Recovery plan. Please
make sure these BC forms are updated per the schedule, and are kept offsite with your Disaster Recovery plans

Contact List/Calling Tree                       BCP009    Quarterly
Supplier/Customer Contact List                  BCP010    Quarterly
Wallet Cards                                    BCP011    Annual

For members of IT, the following activities can be conducted simultaneously for Disaster Recovery and Business Continuity. Please make sure these activities are conducted at least annually.
Training and Awareness                          BCP012            *Annual
Call Tree Testing                               BCP013, BCP014    *Annual
DR007 Instructions and item description: Complete one of these high-level documents for each stand-alone application or infrastructure plan.
*Annual = Item should be reviewed at least annually (more often if significant changes occur).
1. Add core components to this list that are specific to your plan where applicable.
2. BC plan templates are here
3. Indicate who is responsible to complete the review and maintenance of this section.
4. Indicate the frequency this section is reviewed (e.g. monthly, quarterly, semi-annually, and annually). Do not wait for the recommended
frequency to perform a needed revision.
5. Indicate dates as sections are completed.

Maintenance considerations:
Maintenance cycles should tie to testing cycles (usually updates to your plans are required following a test)
Vital and critical business processes may drive more frequent review and updates to DR plans (e.g. quarterly).
Building access and important team contact information can change frequently. Building privileges, personnel changes, phone
numbers, email, and pagers should be reviewed and updated quarterly.
Documents may include contact cards, awareness materials, recovery tasks, run books, web- or network-based documentation, or regulatory references.

Date Prepared: Prepared by:
DR008 Disaster Recovery Plan History Log

1. Author   2. Date Created/Revised   3. Version   4. General description of change(s)   5. Approver

DR008 Instructions and item description: Complete one copy of this form for each application/infrastructure DR plan.
1. Indicate the CDSID of the originator/modification owner of the DR Plan
2. Indicate the date (created/revised)
3. Indicate the version (see example in instructions above).
4. Describe the changes made to the current version
5. Indicate the approver of the DR plan

History log considerations:


Use this form to define responsibilities, frequency, and maintenance of core Disaster Recovery plan components.
Use this form to track significant changes to your plan.
The format for version tracking may be as follows: Year, Quarter (e.g., 6.2 would represent Year 2006, Quarter 2). Only note significant
changes, not simple things like a phone number or email address change.
Change description should be general in nature yet meaningful to help explain changes to the reader.
To maintain separation of duties the creator/reviser/approver should be separate individuals.
Changes can be submitted by anyone. Consider using standard change recommendation and submission procedures (see optional
change request form at end of this document).

Date Prepared: Prepared by:
DR009 Disaster Recovery Distribution Log
DR009 Instructions and item description: Complete one copy of this form for each application/infrastructure DR plan.
1. Copy number - Control of documents being released (e.g. 1 of 5, 2 of 5, etc.).
2. Date distributed - Date documents are distributed.
3. Recipient CDSID - Ford Motor Company Corporate Directory System (personal) identification of the individual the document was distributed to.
4. Alternate/Off-site location - Location (include room, cabinet, drawer, etc.) where the document is stored.

Distribution log considerations:


Personal contact information (personal phone numbers, email address, home address etc) is considered PII (personally
identifiable information) and should be protected.
Information about suppliers, including commitments, alternate suppliers, guaranteed service level agreements or supplier
personal contact information could be considered confidential also. Be sure this document is in compliance with any supplier non-disclosure
agreements.
Do not discuss information that may risk strategic initiatives in the DR plan. Limit the information provided (data center locations,
access, security routines, etc) to only what is needed.
Distribute DR plans on a need to know basis. Recover superseded material for proper disposal to limit the use of outdated
information.
Do not include passwords, PINs, location of keys, etc. Instead, substitute the point of contact (group or individual) responsible
for maintaining that type of information. In some cases, there may be a need to employ a "two person integrity" system to provide
separation of duties and to prevent any one person from having full access to sensitive information (e.g. split a password into two sealed
envelopes to be opened only by authorized agents).
Do not store the document in a vehicle.
Use this form to track your document distribution. If appropriate, consider a one-to-one replacement to reduce use of outdated
versions.
If a management-specific version (containing full contact information) exists, note that information here.
Consider releasing frequently changing pages (such as contact information) quarterly, but the base plan
annually.
Tell recipients what to do with the new and outdated documents (e.g. destroy old copy or replace specific pages, and destroy old
pages).
If you store your DR plan on a portable electronic device (laptops, USB, CD/DVDs etc), test access to ensure all team
members can access it in an emergency. Use password and/or file encryption as appropriate when working with electronic document
distribution or portable electronics containing vital/critical documents such as DR plans.
Update all off-site versions of vital/critical documentation such as DR plans, run books, diagrams, etc. Consider
synchronizing documents stored on support team laptops using a scheduled automation script (a sketch follows this list). This provides an effective means of
organizing changes to information and minimizing the effort of distribution maintenance.
Ensure vital/critical work-in-process documents are stored and backed up regularly so that the authors can provide the most up-
to-date recovery details.
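
Two of the considerations above (encrypting electronically distributed plans and keeping off-site copies synchronized with a scheduled automation script) can be combined in one small job. The sketch below is illustrative only: the file paths are placeholders, it assumes the third-party Python cryptography package, and the encryption key must be generated and stored separately from the synchronized copies.

# Hypothetical scheduled sync/encrypt sketch for DR plan distribution.
# Paths are placeholders; the key file is expected to hold a key produced by
# Fernet.generate_key() and must be managed separately from the synced copies.
from pathlib import Path
from cryptography.fernet import Fernet

SOURCE_PLAN = Path("/teamshare/dr/DR_plan_current.pdf")   # placeholder path
OFFSITE_DIR = Path("/offsite_sync/dr")                    # placeholder path
KEY_FILE = Path("/secure/keys/dr_plan.key")               # placeholder path

def sync_encrypted_copy() -> Path:
    """Encrypt the current DR plan and place it in the off-site sync folder."""
    key = KEY_FILE.read_bytes()
    token = Fernet(key).encrypt(SOURCE_PLAN.read_bytes())
    OFFSITE_DIR.mkdir(parents=True, exist_ok=True)
    target = OFFSITE_DIR / (SOURCE_PLAN.name + ".enc")
    target.write_bytes(token)
    return target

if __name__ == "__main__":
    print(f"Encrypted copy written to {sync_encrypted_copy()}")

A job scheduler (cron, Windows Task Scheduler, or similar) would run this after each plan revision so off-site copies never lag the master document.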

Change Request Form (Optional)

Requestor Name/CDSID                Department/Organization

DR Coordinator                      Department/Organization

DR plan version          Section of change being requested (e.g. DR002)          Date Submitted
Description of Change

Reason for Change

Impact on Business Continuity/DR Plan

Change request decision Comments:


Accepted
Rejected
Disaster Recovery Plan Team Manager Date

