
Table of Contents

1. Introduction
2. Internet Data Center
3. Scope for Modern Data Centers
4. Data Center Classification
5. Design Considerations
6. Energy Use
7. Network Infrastructure
8. Applications
9. Technology Operation System (TOS)
10. Standard Operating Process (SOP) In Data Center
    A. Clarity Case Assign / Close
    B. Sun Case Log
    C. Netapp Case Log
    D. EMC Case Log
11. SOP For SAN Switch
12. SOP For Storage

1. Introduction
Reliance Communications: Reliance Communications (NSE: RCOM, BSE: 532712), formerly known as Reliance Infocomm, along with Reliance Telecom and Flag Telecom, is part of Reliance Communications Ventures (RCoVL). Reliance Communications Limited, founded by Dhirubhai H. Ambani (1932–2002), is the flagship company of the Reliance Anil Dhirubhai Ambani Group. The Reliance Anil Dhirubhai Ambani Group currently has a net worth in excess of Rs 64,000 crore (US$13.6 billion), cash flows of Rs 13,000 crore (US$2.8 billion), and a net profit of Rs 8,400 crore (US$1.8 billion). The equity shares of RCOM are listed on the Bombay Stock Exchange and the National Stock Exchange. The Global Depository Receipts and Foreign Currency Convertible Bonds are listed on the Luxembourg Stock Exchange and the Singapore Stock Exchange respectively.

Reliance Anil Dhirubhai Ambani Group: Reliance ADAG is a conglomerate headed by Anil Ambani and is among India's top three private sector business houses, with a market capitalization of US$81 billion and net assets of US$29 billion. Through its products and services, the Reliance Group touches the life of one in eight Indians every single day. Its business presence extends to over 20,000 towns and 4.5 lakh villages in India, and to five continents across the world. Across its companies, the group has a customer base of over 100 million and a shareholder base of over 12 million, among the largest in the world. The group is present in many sectors including telecom, capital, power, infrastructure, entertainment and health, and is headquartered in Navi Mumbai.

History: The Reliance group was founded by Dhirubhai Ambani in 1966 as a polyester firm. Dhirubhai started the equity cult in India. Reliance later entered financial services, petroleum refining and the power sector. By 2002, Reliance had grown into a US$15 billion conglomerate. After the death of Dhirubhai Ambani on July 6, 2002, Reliance was headed by his sons. The group was formed after the two feuding brothers, Mukesh Ambani and Anil Ambani, split Reliance Industries. Anil Ambani took responsibility for Reliance Infocomm, Reliance Energy, Reliance Capital and RNRL. This led to a new beginning under the RELIANCE name. Later the group entered the power sector through Reliance Power and the entertainment sector by acquiring Adlabs.

Anil Dhirubhai Ambani: Anil Ambani (born 4 June 1959) is an Indian business baron and chairman of the Reliance Group. Anil's elder brother, Mukesh Ambani, is worth more than 29 billion dollars and owns another company, Reliance Industries. As of 2010, Anil is the fourth richest Indian with a personal wealth of $13.7 billion, behind Mukesh Ambani, Lakshmi Mittal and Azim Premji. He is a member of the Board of Overseers at the Wharton School of the University of Pennsylvania, and a member of the Boards of Governors of the Indian Institute of Technology Kanpur and the Indian Institute of Management, Ahmedabad. He is a member of the Central Advisory Committee of the Central Electricity Regulatory Commission. He is also the Chairman of the Board of Governors of DA-IICT, Gandhinagar, and was a Member of Parliament in the Rajya Sabha, from which he resigned in March 2006.

Logo:

The RELIANCE logo was launched in 2006 by group brand ambassador Amitabh Bachchan. The Reliance apex conveys the urge for progress; blue represents stability and red represents dynamism. Other brand ambassadors of group companies are Hrithik Roshan for Reliance Communications, and Abhishek Bachchan and Sonu Nigam for Big 92.7 FM. Earlier, Virender Sehwag used to be the brand ambassador of Reliance Communications, then known as Reliance Infocomm. Reliance Mobile is also the main sponsor of the ICC Cricket World Cup and ICC World Twenty20 cricket tournaments.

Services by the Reliance ADA Group:

1. Reliance Capital: Reliance Capital Limited (RCL) is a Non-Banking Financial Company (NBFC) registered with the Reserve Bank of India under section 45-IA of the Reserve Bank of India Act, 1934. It became a public limited company in 1986 and is now listed on the Bombay Stock Exchange and the National Stock Exchange (India). RCL has a net worth of over Rs 3,300 crore and over 165,000 shareholders. On conversion of outstanding equity instruments, the net worth of the company will increase to about Rs 4,100 crore. It is headed by Anil Ambani and is part of the Reliance ADA Group. Reliance Capital ranks among the top three private sector financial services and banking companies in terms of net worth. Reliance Capital has interests in asset management, mutual funds, life and general insurance, private equity and proprietary investments, stock broking, Reliance PMS, depository services and financial products, consumer finance, and other activities in financial services.

2. Reliance Infrastructure: Reliance Infrastructure (BSE: 500390), formerly known as Reliance Energy and prior to that as Bombay Suburban Electric Supply (BSES), is India's largest private sector enterprise in power utility and a company under the Reliance Anil Dhirubhai Ambani Group banner, one of India's largest conglomerates. The company is headed by Anil Ambani and its corporate headquarters is situated in Mumbai. The company is the sole distributor of electricity to consumers in the suburbs of Mumbai. It also runs power generation, transmission and distribution businesses in other parts of Maharashtra, Goa and Andhra Pradesh. Reliance Energy plans to increase its power generation capacity by adding 16,000 MW with investments of $13 billion. Its projects include the Metro Rail project, the Reliance Airport project, Reliance Sealink One Private, Reliance Road Projects Pvt. Ltd., consultancy projects and power transmission projects.

3. Reliance Natural Resources Limited: Reliance Natural Resources Limited (BSE: 532709) is an Indian energy company involved in the sourcing, supply and transportation of gas, coal and liquid fuels. The company was incorporated on 24 March 2000 and went public on 25 July 2005. It is a part of the Reliance Anil Dhirubhai Ambani Group. Reliance Natural Resources is considering merging with Reliance Power.

4. Reliance Power: Reliance Power Limited (BSE: 532939), a part of the Reliance Anil Dhirubhai Ambani Group, was established to develop, construct and operate power projects in the domestic and international markets. Reliance Energy Limited, an Indian private sector power utility company, along with the Anil Dhirubhai Ambani Group promotes Reliance Power. Along with its subsidiaries, it is presently developing 13 medium and large-sized power projects with a combined planned installed capacity of 33,480 MW. Reliance Natural Resources was merged with Reliance Power in 2010.

5. Reliance BIG Entertainment: Movies and television: Reliance Pictures, Reliance Media Works Ltd (formerly Adlabs), Reliance Media World (formerly Lowry Digital), Reliance Synergy, Reliance Animation, BIGFlix (movies on rent), BIG Cinemas, BIG Music & Video, Reliance Home Video, Reliance ND Studio, RMW Studio. Broadcasting: BIG FM 92.7 (radio stations operating in more than 50 cities), Reliance Broadcasting (RBNL), Reliance Digicom, Reliance Digital TV (BIG TV DTH service). Gaming: Zapak, Codemasters, Jump Games. Internet: BIGADDA (social networking site), BIGOYE.com.

6. Reliance Communications: Reliance Telecommunication Limited (RTL): In July 2007, the company announced it was buying the US-based managed Ethernet and application delivery services company Yipes Enterprise Services for a cash amount of Rs 1,200 crore (the equivalent of US$300 million). Prior to this overseas acquisition, the Reliance group had amalgamated the United States-based Flag Telecom for $210 million (roughly Rs 950 crore). RTL operates in Madhya Pradesh, West Bengal, Himachal Pradesh, Orissa, Bihar, Assam, Kolkata and the Northeast, offering GSM services.

Reliance Globalcom: RGL owns the world's largest private undersea cable system, spanning 65,000 km, seamlessly integrated with Reliance Communications. Over 110,000 km of domestic optic fiber provides a robust Global Service Delivery Platform, connecting 40 key business markets in India, the Middle East, Asia, Europe, and the U.S.

Reliance Internet Data Center (RIDC): RIDC provides Internet Data Center (IDC) services from facilities located in Mumbai, Bangalore, Hyderabad and Chennai. Spread across 650,000 sq ft (60,000 m2) of hosting space, it offers IT infrastructure management services to large, medium and small enterprises. It is one of the leading data center service providers in India and provides services such as colocation, managed server hosting, virtual private servers and data security. It has launched cloud computing services, offering products under its infrastructure as a service (IaaS) and software as a service (SaaS) portfolio, which enable enterprises, mainly small and medium, to obtain cost-effective IT infrastructure and applications on a pay-per-use model.

Reliance Big TV Limited: Reliance Big TV launched in August 2008 and acquired 1 million subscribers within 90 days of launch, the fastest ramp-up ever achieved by any DTH operator in the world. Reliance Big TV offers its 1.7 million customers DVD-quality pictures on over 200 channels using MPEG-4 technology.

Reliance Infratel Limited (RITL): RITL's business is to build, own and operate telecommunication towers, optic fiber cable assets and related assets at designated sites, and to provide these passive telecommunication infrastructure assets on a shared basis to wireless service providers and other communications service providers under long-term contracts.

2. Internet Data Center


A data center (or data centre or datacentre) is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.

History: Data centers have their roots in the huge computer rooms of the early ages of the computing industry. Early computer systems were complex to operate and maintain, and required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised, such as standard racks to mount equipment, elevated floors, and cable trays (installed overhead or under the elevated floor). Old computers also required a great deal of power, and had to be cooled to avoid overheating. Security was important: computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.

During the boom of the microcomputer industry, and especially during the 1980s, computers started to be deployed everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, companies grew aware of the need to control IT resources. With the advent of client-server computing during the 1990s, microcomputers (now called "servers") started to find their places in the old computer rooms. The availability of inexpensive networking equipment, coupled with new standards for network cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company.

The use of the term "data center," as applied to specially designed computer rooms, started to gain popular recognition about this time. The boom of data centers came during the dot-com bubble. Companies needed fast Internet connectivity and non-stop operation to deploy systems and establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called Internet data centers (IDCs), which provide businesses with a range of solutions for systems deployment and operation. New technologies and practices were designed to handle the scale and the operational requirements of such large-scale operations. These practices eventually migrated toward private data centers, and were adopted largely because of their practical results.

As of 2007, data center design, construction, and operation is a well-known discipline. Standards documents from accredited professional groups, such as the Telecommunications Industry Association, specify the requirements for data center design. Well-known operational metrics for data center availability can be used to evaluate the business impact of a disruption. There is still a lot of development being done in operational practice, and also in environmentally friendly data center design. Data centers are typically very expensive to build and maintain. For instance, Amazon.com's new 116,000 sq ft (10,800 m2) data center in Oregon is expected to cost up to $100 million.

3. Scope for Modern Data Centers


IT operations are a crucial aspect of most organizational operations. One of the main concerns is business continuity; companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely. It is necessary to provide a reliable infrastructure for IT operations in order to minimize any chance of disruption. Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach. A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of both fiber optic cables and power, which includes emergency backup power generation.

Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or information technology (IT) equipment. The equipment may be used to:
- Operate and manage a carrier's telecommunication network
- Provide data center based applications directly to the carrier's customers
- Provide hosted applications for a third party to provide services to their customers
- Provide a combination of these and similar data center applications

Effective data center operation requires a balanced investment in both the facility and the housed equipment. The first step is to establish a baseline facility environment suitable for equipment installation.

Standardization and modularity can yield savings and efficiencies in the design and construction of telecommunications data centers. Standardization means integrated building and equipment engineering. Modularity has the benefits of scalability and easier growth, even when planning forecasts are less than optimal. For these reasons, telecommunications data centers should be planned in repetitive building blocks of equipment and associated power and support (conditioning) equipment when practical. The use of dedicated centralized systems requires more accurate forecasts of future needs to prevent expensive over-construction or, perhaps worse, under-construction that fails to meet future needs.

4. Data Center Classification


The TIA-942: Data Center Standards Overview describes the requirements for data center infrastructure. The simplest is a Tier 1 data center, which is basically a server room following basic guidelines for the installation of computer systems. The most stringent level is a Tier 4 data center, which is designed to host mission critical computer systems, with fully redundant subsystems and compartmentalized security zones controlled by biometric access control methods. Another consideration is the placement of the data center in a subterranean context, for data security as well as environmental considerations such as cooling requirements. The four levels are defined, and copyrighted, by the Uptime Institute, a Santa Fe, New Mexico-based think tank and professional services organization. The levels describe the availability of data from the hardware at a location; the higher the tier, the greater the availability. The levels are:
- Tier 1: non-redundant capacity components (single uplink and servers), guaranteeing 99.671% availability
- Tier 2: Tier 1 plus redundant capacity components, guaranteeing 99.741% availability
- Tier 3: Tier 1 and Tier 2 plus dual-powered equipment and multiple uplinks, guaranteeing 99.982% availability
- Tier 4: Tiers 1, 2 and 3 plus fully fault-tolerant components, including uplinks, storage, chillers, HVAC systems and servers, guaranteeing 99.995% availability

5. Design Considerations
A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment is often in the form of servers mounted in 19-inch rack cabinets, which are usually placed in single rows forming corridors (so-called aisles) between them. This allows people access to the front and rear of each cabinet. Servers differ greatly in size, from 1U servers to large freestanding storage silos which occupy many tiles on the floor. Some equipment, such as mainframe computers and storage devices, is often as big as the racks themselves, and is placed alongside them. Very large data centers may use shipping containers packed with 1,000 or more servers each; when repairs or upgrades are needed, whole containers are replaced (rather than repairing individual servers). Local building codes may govern minimum ceiling heights.

Environmental control: The physical environment of a data center is rigorously controlled. Air conditioning is used to control the temperature and humidity in the data center. ASHRAE's "Thermal Guidelines for Data Processing Environments" recommends a temperature range of 16–24 °C (61–75 °F) and a humidity range of 40–55% with a maximum dew point of 15 °C as optimal for data center conditions. The temperature in a data center will naturally rise because the electrical power used heats the air. Unless the heat is removed, the ambient temperature will rise, resulting in electronic equipment malfunction. By controlling the air temperature, the server components at the board level are kept within the manufacturer's specified temperature/humidity range. Air conditioning systems help control humidity by cooling the return space air below the dew point. With too much humidity, water may begin to condense on internal components. In a dry atmosphere, ancillary humidification systems may add water vapor, because humidity that is too low can result in static electricity discharge problems which may damage components. Subterranean data centers may keep computer equipment cool while expending less energy than conventional designs. Modern data centers try to use economizer cooling, where they use outside air to keep the data center cool. Washington State now has a few data centers that cool all of the servers using outside air 11 months out of the year. They do not use chillers/air conditioners, which creates potential energy savings in the millions.

There are many types of commercially available floors that offer a wide range of structural strength and loading capabilities, depending on component construction and the materials used. The general types of raised floors include stringerless, stringered, and structural platforms, all of which are discussed in detail in GR-2930 and summarized below.

Stringerless Raised Floors - One non-earthquake type of raised floor generally consists of an array of pedestals that provide the necessary height for routing cables and also serve to support each corner of the floor panels. With this type of floor, there may or may not be provisioning to mechanically fasten the floor panels to the pedestals.

This stringerless type of system (having no mechanical attachments between the pedestal heads) provides maximum accessibility to the space under the floor. However, stringerless floors are significantly weaker than stringered raised floors in supporting lateral loads and are not recommended.

Raised floor

Stringered Raised Floors - This type of raised floor generally consists of a vertical array of steel pedestal assemblies (each assembly is made up of a steel base plate, tubular upright, and a head) uniformly spaced on two-foot centers and mechanically fastened to the concrete floor. The steel pedestal head has a stud that is inserted into the pedestal upright, and the overall height is adjustable with a leveling nut on the welded stud of the pedestal head.

Structural Platforms - One type of structural platform consists of members constructed of steel angles or channels that are welded or bolted together to form an integrated platform for supporting equipment. This design permits equipment to be fastened directly to the platform without the need for toggle bars or supplemental bracing. Structural platforms may or may not contain panels or stringers.

Electrical power: Backup power consists of one or more uninterruptible power supplies and/or diesel generators.

To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically fully duplicated, and critical servers are connected to both the "A-side" and "B-side" power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.

A bank of batteries in a large data center, used to provide power until diesel generators can start.

Data centers typically have raised flooring made up of 60 cm (2 ft) removable square tiles. The trend is towards an 80–100 cm (31–39 in) void to cater for better and more uniform air distribution. These provide a plenum for air to circulate below the floor, as part of the air conditioning system, as well as providing space for power cabling.

Low-voltage cable routing: Data cabling is typically routed through overhead cable trays in modern data centers. But some are still recommending under-raised-floor cabling for security reasons, and to allow the addition of cooling systems above the racks in case this enhancement is necessary. Smaller/less expensive data centers without raised flooring may use anti-static tiles for a flooring surface. Computer cabinets are often organized into a hot aisle arrangement to maximize airflow efficiency.

Fire protection:

Data centers feature fire protection systems, including passive and active design elements, as well as the implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a developing fire by detecting particles generated by smoldering components prior to the development of flame. This allows investigation, interruption of power, and manual fire suppression using handheld fire extinguishers before the fire grows to a large size. A fire sprinkler system is often provided to control a full-scale fire if it develops. Fire sprinklers require 18 in (46 cm) of clearance (free of cable trays, etc.) below the sprinklers. Clean agent gaseous fire suppression systems are sometimes installed to suppress a fire earlier than the fire sprinkler system. Passive fire protection elements include the installation of fire walls around the data center, so a fire can be restricted to a portion of the facility for a limited time in the event of the failure of the active fire protection systems, or if they are not installed. For critical facilities these firewalls are often insufficient to protect heat-sensitive electronic equipment, however, because conventional firewall construction is only rated for flame penetration time, not heat penetration. There are also deficiencies in the protection of vulnerable entry points into the server room, such as cable penetrations, coolant line penetrations and air ducts. For mission critical data centers, fireproof vaults with a Class 125 rating are necessary to meet NFPA 75 standards.

Security: Physical security also plays a large role with data centers. Physical access to the site is usually restricted to selected personnel, with controls including bollards and mantraps. Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information on any of the systems within. The use of fingerprint recognition mantraps is starting to be commonplace.

Biometric & Key card access

Surveillance Cameras

6. Energy Use
Energy use is a central issue for data centers. Power draw for data centers ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building. For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center. By 2012 the cost of power for the data center is expected to exceed the cost of the original capital investment.

Greenhouse gas emissions:

In 2007 the entire information and communication technologies (ICT) sector was estimated to be responsible for roughly 2% of global carbon emissions, with data centers accounting for 14% of the ICT footprint. The US EPA estimates that servers and data centers are responsible for up to 1.5% of total US electricity consumption, or roughly 0.5% of US GHG emissions, for 2007. Given a business-as-usual scenario, greenhouse gas emissions from data centers are projected to more than double from 2007 levels by 2020. Siting is one of the factors that affect the energy consumption and environmental effects of a data center. In areas where the climate favors cooling and lots of renewable electricity is available, the environmental effects will be more moderate. Thus countries with favorable conditions, such as Finland, Sweden and Switzerland, are trying to attract cloud computing data centers. According to an 18-month investigation by scholars at Rice University's Baker Institute for Public Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore, data center-related emissions will more than triple by 2020.

Energy efficiency: The most commonly used metric to determine the energy efficiency of a data center is power usage effectiveness, or PUE. This simple ratio is the total power entering the data center divided by the power used by the IT equipment. Power used by support equipment, often referred to as overhead load, mainly consists of cooling systems, power delivery, and other facility infrastructure like lighting. The average data center in the US has a PUE of 2.0, meaning that the facility uses one watt of overhead power for every watt delivered to IT equipment. State-of-the-art data center energy efficiency is estimated to be roughly 1.2. Some large data center operators like Microsoft and Yahoo! have published projections of PUE for facilities in development; Google publishes quarterly actual efficiency performance from data centers in operation.
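As an illustrative calculation (the figures are assumed, not taken from any particular facility): if a data center draws 1,500 kW of total power and its IT equipment consumes 1,000 kW, then PUE = 1,500 / 1,000 = 1.5, meaning 0.5 W of overhead (cooling, power delivery, lighting) is spent for every watt delivered to IT equipment. At the US average PUE of 2.0, the same 1,000 kW IT load would require 2,000 kW of total facility power.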

The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the Energy Star label, a data center must be within the top quartile of energy efficiency of all reported facilities.

7. Network Infrastructure
Communications in data centers today are most often based on networks running the IP protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world. Redundancy of the Internet connection is often provided by using two or more upstream service providers.

Some of the servers at the data center are used for running the basic Internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers.

An example of "rack mounted" servers.

Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, etc. Also common are monitoring systems for the network and some of the applications. Additional off site monitoring systems are also typical, in case of a failure of communications inside the data center.

8. Applications
The main purpose of a data center is running the applications that handle the core business and operational data of the organization. Such systems may be proprietary and developed internally by the organization, or bought from enterprise software vendors. Common examples of such applications are ERP and CRM systems.

A data center may be concerned with just operations architecture or it may provide other services as well. Often these applications will be composed of multiple hosts, each running a single component. Common components of such applications are databases, file servers, application servers, middleware, and various others.

Multiple racks of servers, and how a data center commonly looks.

Data centers are also used for off-site backups. Companies may subscribe to backup services provided by a data center. This is often used in conjunction with backup tapes. Backups of servers can be taken locally onto tapes. However, tapes stored on site pose a security threat and are also susceptible to fire and flooding. Larger companies may also send their backups off-site for added security. This can be done by backing up to a data center. Encrypted backups can be sent over the Internet to another data center where they can be stored securely. For disaster recovery, several large hardware vendors have developed mobile solutions that can be installed and made operational in very short time. Vendors such as

Cisco Systems, Sun Microsystems, IBM, and HP have developed systems that could be used for this purpose.

9. Technology Operation System (TOS)


The Technology Operation System (TOS) is a framework of IDC technology used to track all customer cases in order to plan, implement, measure, and fine-tune operational efficiency. The TOS framework ensures the integration of various ITIL processes with the technology operation, such as incident management, problem management, change management and capacity management, along with their best practices.

The TOS framework has two major components, called the Case Logging Portal (CLP) and the Clarity system. Case logging and tracking is done using CLP, whereas case resolution and confirmation is done using the Clarity system.

Operational Definitions:

A. Service Issues:

P1S1: Severe problem negatively impacting the IDC's ability / the Customer's reputation to conduct business, and/or impacting mission critical business functions of Customers. There is no acceptable workaround and the business risk/loss is high. Impact is widespread and a significant business loss is occurring for the Customer. Needs the Chief Operations Officer's sign-off to upgrade a case from P1S2 to P1S1.

P1S2: Severe problem affecting the production environment and/or impacting mission critical business functions. A large group of end users is being impacted and/or business functions are limited. A workaround may be available. Impact is widespread, impacting at least one business function of the Customer. Revenue is impacted severely. This includes incidents related to virus outbreaks, security breaches, etc. Needs the Chief Engineer's sign-off to upgrade a case from P1S3 to P1S2.

P1S3: Problem having business impact. System or business functions are inconvenienced but still functional; a workaround is available. The incident may have an impact on revenue. This includes emergency requests for critical security patch updates, virus definition updates, etc. Needs the Executive Engineer's sign-off to upgrade a case from P2 to P1S3.

P2: Problem having low business impact due to technical issues. System or business functions are inconvenienced but still functional; a workaround is available.

P8: Problem having low business impact due to production issues. System or business functions are inconvenienced but still functional; a workaround is available. This includes all issues reported through Customer, Helpdesk, TAM and other sources for Production Operations (e.g. Backup Services).

B. Service Requests:

P0: A proactive case for a service issue, generally captured through monitoring tools like HPOV, NMS, EG, etc. This also includes administrative tasks as requested by customers.

P3: Indicates a request from a Customer for Remote Hands & Eye (RHE) support activity.

P4: MACD = Modify / Add / Change / Delete request received from a Customer (non-RHE).

P5: MACD = Modify / Add / Change / Delete request generated internally at Towers. Needs the Executive Engineer's sign-off on successful closure of the P5 case.

P6: Indicates a request from the IDC to an external vendor for tracking the replacement of defective spares. Needs the Executive Engineer's sign-off on successful closure of the P6 case.

P7: Indicates a request from OMS for execution of a specific service or services under an SPF/SDF. Needs the Chief Engineer's sign-off on successful closure of the P7 case.

C. Visit Desk:

P9: Indicates a request from a Customer for issue of a Work Order to allow entry of their nominee at the IDC and to get the out gate passes (OGP) to move material out of the IDC.

D. Solutions:

P10: Indicates a request for a technical solution to a specific customer / instance during pre-sales. Needs the Chief Operations Officer's sign-off on successful closure of the P10 case.

E. PPC:

P11: Indicates a request for a production planning and control (PPC) activity. Includes an Active Backup Policy / Storage Allocation.

F. Customer Self-service:

P12: Indicates an issue or request raised directly by a Customer through the CNM Portal. The Helpdesk reassigns the cases to the respective workgroup with the correct Case Code, Cause Code and details. Needs the on-duty EM's sign-off on successful closure of the P12 case.

Case workflow:

10. Standard Operating Process (SOP) In Data Center


A. Clarity Case Assign / Close
Step 1: Log in to the Clarity software using your username and password.

Step 2: Go to the Fault Workflow option in Fault Manager.

Step 3: After entering Fault Workflow you will see the following window. In it, choose the proper work group, link category and timer, and click on SET.

Step 4: Enter the issue in the Description tab, and also enter the ETR (Estimated Time Required) for the given case.

Step 5: Enter the status of the case.

Step 6: Click on the Save button to save the changes.

B. Sun Case Log
Assumption: It is assumed that, before logging any case, we know the serial number of the device. Alternatively, cases can be logged with our CSI number, which is 17083868.
1. The site for logging the case is https://support.oracle.com/CSP/ui/flash.html

2. Provide the valid credentials in the site logging portal. For us they are: Username: Nikhil.ayare@relianceada.com, Password: Ril.1.123.

3. After logging in, create an SR by clicking on the Create SR tab in the Dashboard.

4. The first step is to select your product by entering the valid serial number.

5. Click Next and provide the contact details, along with the operating system and the valid platform number.

6. Click Next, provide the problem summary and the severity level, and then click Next.

7. It will then check its knowledge base and suggest some documents so that we can try to resolve the issue ourselves. Click Next.

8. Now upload the relevant files and logs for the TSE, so that they can help the Oracle engineer resolve the issue.

9. After submitting the logs, answer the last three questions and submit the SR.

C. Netapp Case Log


Assumption: It is assumed that, before logging any case, we know the serial number of the device.
1. The site for logging the case is https://now.netapp.com.

2. Provide the valid credentials in the site logging portal. For us they are: Username: idc_techsupport, Password: tech123. After logging in, the next step is to create a case.

3. To create the case, click on the Open Case tab in the Dashboard.

4. Provide a valid serial number.

5. Click on Go to open the case for that Device.

6. Choose the proper category and sub-category from the given options.

7. Describe the problem you are facing.

After entering the problem you will see some knowledge base documents which may help you resolve it. After going through those documents, if the problem is not resolved, you can click on the "Problem Unresolved - Go to final step" tab.

8. Now provide your problem description in detail and create the case.

D. EMC Case Log
1. The site for logging the case is https://powerlink.emc.com

2. Provide the valid credentials in the site logging portal. For us they are: Username: idc.techsupport@relianceada.com, Password: ricidcpe. After logging in, create a service request by clicking on the Create Service Request tab.

3. Provide detailed information about your product and the issue you are facing:
A. Problem severity
B. Problem description
C. Problem summary, etc.

4. Now click on the Submit button to log the case.
5. Once the service request is created, attach the logs required for troubleshooting.
6. Coordinate with the vendor for the same.

11. SOP For SAN Switch

SAN Switch Login Process


There are four levels of account access:
- Root - not recommended
- Factory - not recommended
- Admin - recommended for administrative operations
- User - recommended for non-administrative operations

Using Telnet: Give the username and password to log in to the switch.

Using SSH: Give the IP address of the switch, then enter the username and password.
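As a minimal illustrative sketch (the management IP and switch name below are taken from examples elsewhere in this document, and the exact prompt text varies by Fabric OS version):

    ssh admin@10.77.77.77
    admin@10.77.77.77's password: ********
    SAN7_IDC2_SUN1:admin>

Once at the switch prompt, the commands described in the following sections (switchshow, ipaddrshow, firmwareshow, etc.) can be run directly.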

Switchshow Command
switchshow: This command displays a switch summary and a port connection status summary.

In the output:
Switch name = SAN7_IDC2_SUN1
Fabric name = RCOMITFABRIC1
Index = 1, 2, 3, 4, ...
Slot = card number, 1 to 4
Port = port number, 0 to 47
Port status values:
Online - the port is up and running
No Light - the module is not receiving light

No Sync - the module is receiving light but is out of sync
In Sync - the module is receiving light and is in sync
No Module - no SFP module in this port
No Card - no card present in this switch slot
Port Proto values: E-Port (e.g. trunk port, master is Slot 4 Port 1), F-Port (e.g. 21:00:00:1b:32:01:8f:68), Disabled

Check IP address of switch
ipaddrshow: This command displays the switch and control processor IP address and subnet mask information.
Default IP address: 10.77.77.77
Default subnet mask: 255.255.255.0

Check switch firmware version


firmwareshow: This command displays firmware version information for the respective controller cards and the primary or secondary position of both controller cards.

Power supply status


psshow: This command shows the switch power supply status.

Check switch Sensor status


sensorshow: This command displays the status and readings of all power supply, fan and temperature sensors.

Check switch temperature status


tempshow: This command shows the temperature status of the cards connected to the switch.
Ok = the temperature status is OK
Absent = the card is not present in the slot

How to take supportsave logs


1. Log in to the switch using the admin ID and password.
2. Give the command supportsave; it will ask for the information below (an illustrative session sketch follows these steps):
OK to proceed? (yes, y, no, n): [no] - give yes to proceed
Host IP or Host Name: give the IP address of the FTP or SCP server
User Name: give the FTP username
Password: give the FTP user's password
Protocol (ftp or scp): give the protocol type for transferring the file to the server, FTP or SCP
Remote Directory: give the destination directory name to which the file is to be transferred

3. Check that the SupportSave has completed.

4. Once the SupportSave is completed, log in to the FTP server, go to the remote directory /tmp, and check for newly generated files with the switch name SAN7_IDC2_SUN1. Upload these files to the vendor's FTP site provided by the vendor.

5. In this way we get the logs.
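As an illustrative sketch only (the server IP, username and directory are placeholders, and the prompt wording may vary by Fabric OS version), a supportsave session might look roughly like this:

    SAN7_IDC2_SUN1:admin> supportsave
    OK to proceed? (yes, y, no, n): [no] y
    Host IP or Host Name: 10.0.0.50
    User Name: ftpuser
    Password: ********
    Protocol (ftp or scp): ftp
    Remote Directory: /tmp

After completion, the generated files (named after the switch, e.g. containing SAN7_IDC2_SUN1) are found in the remote directory and uploaded to the vendor's FTP site.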

How to take supportshow Logs


1. Log in to the switch, right-click on the top of the window pane (the PuTTY title bar) and select the Change Settings option.

2. In the Logging option, select All session output, give the file name in the Log file name field as c:\SAN7_IDC2_SUN1.txt, and click the Apply button.

3. Give the command supportshow; it will generate logs in the file c:\SAN7_IDC2_SUN1.txt.

4. When log generation is complete, exit from the PuTTY session and give the log file generated in the c:\ drive (SAN7_IDC2_SUN1.txt) to the vendor for analysis.

Storage allocation
From allocation sheet

1. Create the host group (RCOMSAPP40DB_1) on port 3N.

2. Add the WWN number of the host to the created host group.

3. Select the created host group and allocate CU:LDEV (58:01), (67:01), (88:01).

4. The same LDEVs need to be allocated for storage port 4N.

12. SOP For Storage

How to Collect Logs From the EMC Celerra and the Backend CLARiiON
1. Log in via PuTTY to the Celerra IP (172.16.84.13) with the username and password.

2. Now, to collect the Celerra information, go into the directory: cd /nas/tools/

3. Now run the script there: ./collect_support_materials. It will collect all the details for the Celerra.

4. It will save the collected logs in a file such as /nas/var/emcsupport/support_materials_CK200083802307.101220_1156.zip (an illustrative session sketch follows).
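A minimal session sketch (illustrative; the login username is a placeholder, the IP is the Celerra address given above, and the output file name is timestamped so it differs on every run):

    ssh <username>@172.16.84.13
    cd /nas/tools/
    ./collect_support_materials
    ls /nas/var/emcsupport/

The listing should show the newly generated support_materials_*.zip archive, which can then be forwarded to the vendor.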

CLARiiON: Now, if we want to collect the SP collect for the CLARiiON, we do the following steps.
1. Go into the same directory, /nas/tools, and run the hidden file ./.get_spcollect

2. This will collect the logs for both SPs and put them into the /nas/var/log directory.

This will collect all the logs from the storage; a brief session sketch follows.
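A corresponding sketch for the SP collect (illustrative; run from the same Celerra Control Station session as above):

    cd /nas/tools
    ./.get_spcollect
    ls /nas/var/log

The SP collect output for both storage processors should appear under /nas/var/log.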

LUN Assigning & the LUN Masking In The Storage 9990


This is the login GUI for the Storage 9990.

Step 1: Change the mode of the GUI from view mode to edit mode.

Step 2: Click on the port name 7R and click Add Host Group.

Step 3: Add the group name and the OS platform, so that the storage can recognize which server platform it is connected from.

Step 4: After adding the host group, add the WWN to it, along with a name for the WWN.

Step 5: Do the same on the 8R port as well.

Step 6: Now add the LDEVs from the LDEVs box. Before adding any LDEV, please check the paths, because the box contains both free and assigned LDEVs.

Step 7: Copy the paths from here; they will be pasted on the 8R port.

Netapp Allocation
Step 1: Log in to the NetApp storage.

Step 2: Click on Volumes in the left pane; the Add Volume wizard will appear.

Step 3: Select the Flexible radio button.

Step 4: Select the volume parameters, such as the volume name.

Step 5: Select the aggregate from which you want to assign space for the new volume.

Step 6: Select the volume size.

Step 7: Confirm the parameters selected.

Step 8: Select Volumes > Manage. On the right-hand side, filter by aggr0 volumes.

Step 9: Click on NFS Exports > Manage Exports. On the right side, find the newly created volume and click on it; a dialog box appears.

Step 10: Click on Add; a dialog box will appear. Enter the IP address of the host to which RW permissions are to be granted.

Step 11: Enter the IP address for root host access.

Step 12: Confirm the settings.

Step 13: Then commit.

Step 14: At last, mount the volume on the UNIX host. (A roughly equivalent command-line sketch follows these steps.)
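For reference, a minimal sketch of the same allocation using the Data ONTAP 7-mode command line (an assumption of this example; this document itself describes only the GUI wizard). The volume name, size, aggregate and host IP below are placeholders:

    vol create newvol01 aggr0 100g
    exportfs -p rw=10.1.1.5,root=10.1.1.5 /vol/newvol01

and, on the UNIX host (filer01 is a placeholder hostname):

    mount filer01:/vol/newvol01 /mnt/newvol01

This creates a flexible volume on aggregate aggr0, grants the host read/write and root access over NFS, and mounts the export on the UNIX host.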

How To Allocate LUN On EVA8k Box.


1. Open the Command View (CV) of the EVA8k box (remote login to 10.8.51.220, and from there to 172.16.37.254).
2. Go to EVA8k, then go to Virtual Disks and expand.
3. Select the particular host and right-click on Create Disks. You will get the window given below.

4. Give the disk name (say Vdisk001) and size, and then select the disk group (FC or SATA).
5. Mention the path failover, mention the host under Present to host, and click Create Disk.

Now you will get the "operation succeeded" window; click OK.

Now drop down on the host and you will see Vdisk001. See the right-side window, where there is an option called Presentation.

Click on Presentation, save the changes in the window, and click OK.

To confirm the LUN, go to the host and drop down the tab. Click the particular host, and in the right-side window click Presentation. You will be able to see the particular vdisk.
