
Although you don’t need a large computing infrastructure to process massive amounts of data with Apache Hadoop, it can still be difficult to get started. This practical guide shows you how to quickly launch data analysis projects in the cloud by using Amazon Elastic MapReduce (EMR), the hosted Hadoop framework in Amazon Web Services (AWS).

Authors Kevin Schmidt and Christopher Phillips demonstrate best practices for using EMR and various AWS and Apache technologies by walking you through the construction of a sample MapReduce log analysis application. Using code samples and example configurations, you’ll learn how to assemble the building blocks necessary to solve your biggest data analysis problems.


  • Get an overview of the AWS and Apache software tools used in large-scale data analysis
  • Go through the process of executing a Job Flow with a simple log analyzer (a minimal launch sketch follows this list)
  • Discover useful MapReduce patterns for filtering and analyzing data sets
  • Use Apache Hive and Pig instead of Java to build a MapReduce Job Flow
  • Learn the basics for using Amazon EMR to run machine learning algorithms
  • Develop a project cost model for using Amazon EMR and other AWS tools
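
To give a rough sense of what executing a Job Flow involves, here is a minimal sketch that launches an EMR cluster with a single custom JAR step using the boto3 SDK. This is not code from the book; the release label, bucket, JAR, and S3 paths are hypothetical placeholders, and the book itself builds its log analyzer in Java with the AWS tooling of its time.

```python
# Minimal sketch (not from the book): launch an EMR job flow with one
# custom JAR step via boto3. All bucket/JAR/path names are hypothetical.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="log-analysis-job-flow",
    ReleaseLabel="emr-6.15.0",              # assumed EMR release; adjust as needed
    LogUri="s3://my-bucket/emr-logs/",      # hypothetical bucket for cluster logs
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate when the step finishes
        "TerminationProtected": False,
    },
    Steps=[
        {
            "Name": "log-analyzer",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "s3://my-bucket/jars/log-analyzer.jar",  # hypothetical MapReduce JAR
                "Args": [
                    "s3://my-bucket/input/logs/",      # hypothetical input path
                    "s3://my-bucket/output/run-001/",  # hypothetical output path
                ],
            },
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",  # default EMR instance profile
    ServiceRole="EMR_DefaultRole",      # default EMR service role
    VisibleToAllUsers=True,
)

print("Started job flow:", response["JobFlowId"])
```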
