
Rama

__________________________________________________________________________________

Professional Summary

 Over 6 years of experience as an IT professional in software development, including 2+ years of experience in Big Data technologies across domains such as Retail, Insurance, and Telecom

 Working experience with Apache Hadoop ecosystem components such as HDFS, MapReduce, Pig, Hive, Impala, HBase, Sqoop, Flume, Oozie, Spark, and Kafka

 Experience in working with major Hadoop distributions such as Cloudera CDH 5.x and Hortonworks HDP 2.2 and above

 Worked on Cloudera Impala and Apache Spark for real-time analytical processing

 Experience in optimizing MapReduce programs using combiners, partitioners, and custom counters to deliver the best results

 Experience in writing Pig and Hive scripts and extending core functionality by writing custom UDFs

 Good knowledge of file formats such as SequenceFile, RC, ORC, and Parquet, and compression techniques such as gzip, Snappy, and LZO

 Extensively worked on Hive and Impala

 Integration with various Hadoop ecosystem tools:

▪ Integrated Hive and HBase for better performance

▪ Integrated Impala and HBase for real-time analytics

▪ Integrated Hive and Spark SQL for high performance

▪ Integrated Spark and HBase for OLTP workloads

▪ Integrated Kafka with Spark Streaming for high throughput and reliability (see the sketch after the Technical Skills section)

 Worked on Apache Flume for collecting and aggregating large volumes of log data and storing it in HDFS for further analysis

 Experience in importing traditional RDBMS data into HDFS using Sqoop and exporting data from HDFS back to RDBMS to generate reports

 Experience in writing both time-driven and data-driven workflows using Oozie

 Solid understanding of algorithms, data structures and object-oriented programming

 Knowledge of NoSQL column-oriented databases such as HBase and Cassandra


 Experience in managing and troubleshooting Hadoop related issues

 Good knowledge and understanding of Java and Scala programming languages

 Knowledge of Linux and shell scripting

 Diverse experience in utilizing Java tools on business, web, and client-server platforms using Core Java, JSP, Servlets, Spring, Struts, Hibernate, Java Database Connectivity (JDBC), and application servers such as Apache Tomcat

 Improved the performance of existing algorithms in Hadoop using SparkContext, Spark SQL, DataFrames, pair RDDs, and Spark on YARN

 Hands-on experience with Spark SQL queries and DataFrames: importing data from data sources, performing transformations and read/write operations, and saving results to an output directory in HDFS (see the sketch at the end of this summary)

 Implemented POCs using Kafka, Spark Streaming, and Spark SQL

 Knowledge of using SQL queries for backend database analysis
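A minimal sketch, in Scala, of the kind of Spark SQL / DataFrame workflow referred to above: reading data from a source, applying a transformation, and writing the result back to HDFS. The paths, column names, and application name are illustrative assumptions, not taken from any specific project below.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SalesSummary {
  def main(args: Array[String]): Unit = {
    // Spark 2.x-style entry point; earlier work may have used SQLContext instead
    val spark = SparkSession.builder().appName("SalesSummary").getOrCreate()

    // Hypothetical input: CSV records already landed on HDFS by an ingestion job
    val sales = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/raw/sales")

    // Simple transformation: total revenue per store and product
    val summary = sales
      .groupBy("store_id", "product_id")
      .agg(sum("amount").alias("total_amount"))

    // Write the result back to HDFS as Parquet for downstream querying (e.g. Hive/Impala)
    summary.write.mode("overwrite").parquet("hdfs:///data/curated/sales_summary")

    spark.stop()
  }
}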

TECHNICAL SKILLS

Big Data/Hadoop: HDFS, MapReduce, YARN, Hive, Impala, Pig, Sqoop, Oozie, Flume, Zookeeper, HBase, Kafka
Apache Spark: Spark Core, Spark SQL, Spark Streaming
Hadoop Distributions: Cloudera and Hortonworks
Java/J2EE Technologies: Java, J2EE, Servlets, JDBC, XML, AJAX, REST
Frameworks: Struts 2/1, Hibernate, Spring
Methodologies: Agile, UML, Design Patterns (Core Java and J2EE)
Programming Languages: Java, Scala, C, Linux shell scripts
NoSQL DB Technologies: HBase, Cassandra, MongoDB
Databases: Oracle, MySQL, Hive
Web Servers: Tomcat
Web Technologies: HTML5, CSS, XML, JavaScript
Operating Systems: Ubuntu (Linux), Windows 95/98/2000/XP, Mac OS, CentOS
Other Tools: Eclipse, IntelliJ, gedit, Maven, SBT, Git, SVN, Jira, Confluence, MRUnit, JUnit, Hue, MobaXterm
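A minimal sketch, in Scala, of the Kafka and Spark Streaming integration mentioned in the Professional Summary, using the spark-streaming-kafka 0.8-style direct stream that matches the CDH 5 / HDP 2.x era. The broker list, topic name, batch interval, and output path are hypothetical.

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaSparkStreamingPoc {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaSparkStreamingPoc")
    val ssc = new StreamingContext(conf, Seconds(10)) // 10-second micro-batches

    // Direct (receiver-less) stream: Spark tracks Kafka offsets itself, which is what
    // gives this integration its throughput and reliability characteristics
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")
    val topics = Set("events")

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // Simple per-batch processing: count messages per key and persist the counts to HDFS
    stream
      .map { case (key, _) => (key, 1L) }
      .reduceByKey(_ + _)
      .saveAsTextFiles("hdfs:///data/streaming/event_counts")

    ssc.start()
    ssc.awaitTermination()
  }
}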

PROFESSIONAL EXPERIENCE
Client: AT&T Inc.
Role: Hadoop Consultant
Description: Due to high maintenance costs and low performance, AT&T started migrating its traditional data from an RDBMS (Oracle) to HDFS. As the volume of data was growing rapidly day by day, they also wanted to improve the performance of data analysis.

Roles and Responsibilities:

• Played a senior Hadoop developer role and was involved in all phases of the project, from POCs through implementation

• Involved in data migration activity using Sqoop with JDBC drivers for Oracle and IBM DB2 connectors

• Worked on full and incremental imports and created Sqoop jobs

• Exported the analyzed data to relational databases using Sqoop for visualization and to generate reports for the BI team.

• Involved in loading data from local file system (Linux) to HDFS.

• Involved in running Hadoop jobs to process terabytes of XML-format data.

• Validated data using Pig scripts to eliminate bad records

• Created a data model for structuring and storing the data efficiently. Implemented partitioning of tables in HBase.

• Involved in creating Hive tables, loading data, and writing Hive queries that run internally as MapReduce jobs.

• Worked with various Hadoop file formats, including ORC and Parquet.
• Involved in integration of Hive and HBase.

• Implemented bucketing, partitioning and other query performance tuning techniques.

• Tested Apache Tez, an extensible framework for building high-performance batch and interactive data processing applications, on Hive jobs.

• Wrote a Java API for transactions on HBase tables and was involved in building Oozie workflows.

• Read files using an XML parsing technique in Spark by writing code in Scala (POC); see the sketch at the end of this section.
• Designed and documented standard operating procedures using Confluence.
Environment: Hortonworks, Hadoop, Hive, Sqoop, HBase, MapReduce, HDFS, Pig, Tez, Cassandra, Java, Oracle 11g/10g, FileZilla, Unix shell scripting, Spark, Scala
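A minimal sketch, in Scala, of the XML-reading POC approach mentioned above: loading whole XML files from HDFS and parsing them with the Scala XML library. The paths, element names, and output layout are hypothetical and do not reproduce the project's actual parsing logic.

import org.apache.spark.{SparkConf, SparkContext}
import scala.xml.XML

object XmlRecordsPoc {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("XmlRecordsPoc"))

    // Read each XML file whole, as a (fileName, content) pair, so documents are not split across lines
    val files = sc.wholeTextFiles("hdfs:///data/raw/xml/*.xml")

    // Parse each document and pull out a couple of hypothetical fields
    val records = files.flatMap { case (_, content) =>
      val doc = XML.loadString(content)
      (doc \\ "record").map { node =>
        val id = (node \ "id").text
        val value = (node \ "value").text
        s"$id,$value"
      }
    }

    // Store the flattened records back on HDFS as plain text
    records.saveAsTextFile("hdfs:///data/curated/xml_records")

    sc.stop()
  }
}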

Client: Liberty Mutual


Role: Hadoop Consultant
Description: Liberty Mutual chose the Hadoop platform because it provides scalable, flexible, and cost-effective infrastructure. This project integrates unstructured data for analytics with existing traditional enterprise data systems.

Roles and Responsibilities:

• Involved in copying data generated by various telematics devices to HDFS using Flume for further processing.

• Loaded data from the Linux file system to HDFS and created a separate directory for every four-hour window.

• Extensively used Pig for data cleansing and other validations.

• Used Oozie and Zookeeper operational services for coordinating cluster and scheduling
workflows.

• Modeled Impala partitions extensively for data separation to perform faster processing of
data, and followed best practices for tuning.

• Wrote complex Impala queries using aggregate and windowing functions.

• Loaded data from different data sources (Teradata, DB2) into HDFS using Sqoop and then into partitioned Impala tables.

• Involved in integration of Impala and HBase

• Stored customer data in HBase for further transactions and historical trip data in Impala (see the sketch at the end of this section).
• Hands-on experience in exporting the results to relational databases using Sqoop for visualization and to generate reports for the BI team using MSTR.

• Hands-on experience in reviewing and managing Hadoop log files.

• Wrote Java REST API web services.


Environment: Hadoop (CDH5), Linux, HDFS, MapReduce, Sqoop, Impala, Pig, Oozie, HBase, MSTR, SVN, Teradata, IBM DB2, Eclipse.
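A minimal sketch, in Scala against the HBase 1.x client API, of the kind of HBase read/write access used for customer data in this project. The table name, column family, row key, and values are hypothetical.

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
import org.apache.hadoop.hbase.util.Bytes

object CustomerStore {
  def main(args: Array[String]): Unit = {
    // Picks up hbase-site.xml from the classpath for the Zookeeper quorum and other settings
    val conf = HBaseConfiguration.create()
    val connection = ConnectionFactory.createConnection(conf)
    val table = connection.getTable(TableName.valueOf("customers"))

    // Write one customer row: row key = customer id, one column family "info"
    val put = new Put(Bytes.toBytes("cust-0001"))
    put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Jane Doe"))
    table.put(put)

    // Read the row back and extract the name column
    val result = table.get(new Get(Bytes.toBytes("cust-0001")))
    val name = Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name")))
    println(s"name = $name")

    table.close()
    connection.close()
  }
}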

Client: IRC (Retail)
Role: Hadoop Consultant
Description: In this project, sales data is collected and analyzed to generate reports that help drive business growth.

Roles and Responsibilities:

• Loaded data from the Linux file system to HDFS using shell scripts.

• Extensively used Pig for validations.

• Hands-on experience writing MapReduce code to convert unstructured data into structured data and to insert data into HBase from HDFS.

• Optimized existing MapReduce programs using customized partitioners and combiners (see the sketch at the end of this section).

• Optimized MapReduce jobs to use HDFS efficiently by using various compression mechanisms.

• Wrote HDFS CLI commands.

• Performed data analysis on large datasets of product, period, store, and sales data.

• Added Log4j to log the errors.

• Used Eclipse for writing code and Git for version control.

• Involved in creating Impala tables, loading them with data, and writing Impala queries for real-time analytical processing.

• Worked on Oozie workflow engine for job scheduling.

• Monitored the health of MapReduce programs running on the cluster.
• Developed Impala queries to process data and generate data cubes for visualization and reporting.

Environment: Hadoop, Linux, CDH4, CDH5, MapReduce, HDFS, Impala, Pig, Shell
Scripting, Java, NoSQL, Eclipse, Oracle, Git.
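A minimal sketch, in Scala against the Hadoop MapReduce API, of the kind of custom partitioner mentioned above. The key layout ("region|storeId"), class name, and the combiner referenced in the comments are hypothetical illustrations rather than the project's actual code.

import org.apache.hadoop.io.{IntWritable, Text}
import org.apache.hadoop.mapreduce.Partitioner

// Routes records to reducers by a region prefix so that all records
// for one region land on the same reducer.
class RegionPartitioner extends Partitioner[Text, IntWritable] {
  override def getPartition(key: Text, value: IntWritable, numPartitions: Int): Int = {
    // Keys are assumed to look like "region|storeId"; partition on the region prefix only
    val region = key.toString.split('|')(0)
    (region.hashCode & Integer.MAX_VALUE) % numPartitions
  }
}

// Registered on the Job alongside a combiner, for example:
//   job.setPartitionerClass(classOf[RegionPartitioner])
//   job.setCombinerClass(classOf[SumReducer]) // SumReducer: a hypothetical reducer reused as a combiner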

Client: FiveAxioms Inc                                                              Dec 2012 – Oct 2013

Role: Java Developer
Assignment Manager (CRM):
Responsibilities:
▪ Involved in various phases of software development such as modeling, system analysis and design, code generation, and testing using Agile methodology.
▪ Participated in daily stand up meetings.
▪ Designed and developed the web interface in a J2EE framework using the Struts framework (MVC controller) and HTML as per the use-case specification.
▪ Developed JavaScript for client-side data presentation and data validation within the forms.
▪ Created connection through JDBC and used JDBC statements to call stored
procedures.
▪ Produced visual models of the system by generating UML use-case diagrams from
the requirements.
▪ Designed, developed and deployed application using Eclipse and Tomcat application
Server.
▪ Designed classes using object-oriented design (OOD) concepts such as encapsulation and inheritance.
▪ Created custom tags to reuse common functionality.
▪ Participated in reviews of the module using the user requirement documents.
▪ Involved in testing the module as per user requirements.

Environment: Java, Eclipse 2.0, Struts 1.2, JDBC, JSP, Servlets, HTML, JavaScript, Hibernate.

Client: INTONE NETWORKS                                                              April 2011 – Nov 2012

Role: Java Developer
Responsibilities:
▪ Involved in various phases of the Software Development Life Cycle (SDLC), such as design, development, and unit testing.
▪ Involved in developing business domain concepts into use cases, sequence diagrams, class diagrams, component diagrams, and implementation diagrams.
▪ Implemented various J2EE Design Patterns such as Model-View-Controller.
▪ CSS and JavaScript were used to build rich internet pages.
▪ Involved in developing code as per requirements.
▪ Used JDBC to connect the web applications to Databases.
▪ Developed PL/SQL stored procedures for database handling in SQL.
▪ Developed and deployed UI layer logics using JSP, JavaScript, and HTML.
▪ Maintained, developed and fixed bugs for applications.
Environment: Java, J2EE, Struts, Eclipse, Oracle, HTML, JDBC, AJAX, JavaScript.

Academic Details:
Bachelor's in Computer Science Engineering, GPA: 3.82/4.0
JNT University, Hyderabad, INDIA

Certifications
Oracle Certified Professional, Java SE 6 Programmer
