Keith R. Davis
Data Architect, NEMSIS Project
University of Utah, School of Medicine
keith.davis@hsc.utah.edu
INTRODUCTION
Hadoop is an open source Apache software project that enables the distributed processing of large data sets across clusters of commodity servers.
WHAT IS HADOOP?
(2004) Google publishes the GFS and MapReduce papers
(2005) The Apache Nutch search project is rewritten to use MapReduce
(2006) Hadoop is factored out of the Apache Nutch project
(2006) Development is sponsored by Yahoo
... and more
Data is not stored in tables
Hadoop supports only forward parsing
Hadoop doesn't guarantee ACID properties
Hadoop is:
Easily scalable: new cluster nodes can be added as needed
Cost effective: Hadoop brings massively parallel computing to commodity servers
Hadoop is not designed to:
Perform calculations with little or no data (e.g., computing Pi to one million places)
Process data in a transactional manner
Provide interactive ad-hoc results (though this is changing)
BASIC ARCHITECTURE
[Architecture diagram: a client's file is split into blocks (#1, #2, #3), which HDFS replicates across data nodes (Node #1, #2, #3); a job scheduler assigns mapper tasks to the data nodes that hold the relevant blocks.]
LOOKS COMPLICATED!
Not to worry, there are many ways to access the power of MapReduce:
Hadoop Java API (if you like Java and low-level work)
Pig (if you are a script wiz and LINQ doesn't scare you)
Hive (if you know some SQL and coding isn't your thing)
RHadoop (if R is your thing)
SAS/ACCESS (if SAS is your thing)
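As a taste of how compact these higher-level interfaces can be, here is a minimal word-count sketch in HiveQL; the table docs and its single STRING column line are assumed purely for illustration:

-- Hypothetical table "docs" with one STRING column "line":
-- split each line on spaces, explode the words into rows, then count them.
SELECT word, count(1) AS cnt
FROM (SELECT explode(split(line, ' ')) AS word FROM docs) w
GROUP BY word;

Hive compiles this into MapReduce jobs behind the scenes; no Java is required.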
A closer look at Hive:
Supports the concepts of databases, tables, and partitions through the use of metadata (think of views over delimited text files)
Supports a restricted version of SQL (no updates or deletes)
Supports joins between tables: INNER and OUTER (FULL, LEFT, and RIGHT)
Supports UNION to combine multiple SELECT statements
Provides a rich set of data types and predefined functions
Allows the user to create custom scalar and aggregate functions
Executes queries via MapReduce
Provides JDBC and ODBC drivers for integration with other applications
Hive is NOT a replacement for a traditional RDBMS, as it is not ACID compliant
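To make the list above concrete, here is a hedged HiveQL sketch of a partitioned table over delimited text files and a simple join; the table names (ems_runs, agencies), columns, and partition key are hypothetical:

-- Hypothetical tables over tab-delimited text files, one partitioned by year
CREATE TABLE ems_runs (run_id STRING, agency_id STRING, response_sec INT)
PARTITIONED BY (yr INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

CREATE TABLE agencies (agency_id STRING, agency_name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- A restricted-SQL join (remember: no UPDATE or DELETE here)
SELECT a.agency_name, avg(r.response_sec) AS avg_response
FROM ems_runs r
JOIN agencies a ON (r.agency_id = a.agency_id)
WHERE r.yr = 2013
GROUP BY a.agency_name;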
If you use HIVE to create sample sets for your analysis, here are a few standard functions you may find useful:
round(), floor(), ceil(), rand(), exp(), ln(), log10(), log2(), log(), pow(), sqrt(), bin(), hex(), unhex(), conv(), abs(), pmod(), sin(), asin(), cos(), acos(), tan(), atan(), degrees(), radians(), positive(), negative(), sign(), e(), pi(), count(), sum(), avg(), min(), max(), variance(), var_samp(), stddev_pop(), stddev_samp(), covar_pop(), covar_samp(), corr(), percentile(), percentile_approx(), histogram_numeric(), collect_set()
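For example, here is a hedged sketch of pulling a roughly one-percent random sample and summarizing it with a few of the functions above; ems_runs and response_sec are the same hypothetical names used earlier:

-- ~1% pseudo-random sample, summarized with avg, stddev_pop, and percentile_approx
SELECT count(*) AS n,
       avg(response_sec) AS mean_response,
       stddev_pop(response_sec) AS sd_response,
       percentile_approx(response_sec, 0.5) AS median_response
FROM ems_runs
WHERE rand() < 0.01;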
RESOURCES
Cloudera (Easy Setup) - http://www.cloudera.com/content/cloudera/en/home.html
NoSQL - http://nosql-database.org/
Emulab - http://www.emulab.net/
Apache Hadoop - http://hadoop.apache.org/#Getting+Started
RHadoop - https://github.com/RevolutionAnalytics/RHadoop/wiki
SAS/ACCESS - http://www.sas.com/software/data-management/access/index.html
THANK YOU!