Integrating data from multiple sources is essential in the age of big data, but it can be a challenging and time-consuming task. This handy cookbook provides dozens of ready-to-use recipes for using Apache Sqoop, the command-line interface application that optimizes data transfers between relational databases and Hadoop.
Sqoop is both powerful and bewildering, but with this cookbook's problem-solution-discussion format, you'll quickly learn how to deploy and then apply Sqoop in your environment. The authors provide MySQL, Oracle, and PostgreSQL database examples on GitHub that you can easily adapt for SQL Server, Netezza, Teradata, or other relational systems.
Transfer data from a single database table into your Hadoop ecosystem
Keep table data and Hadoop in sync by importing data incrementally
Import data from more than one database table
Customize transferred data by calling various database functions
Export generated, processed, or backed-up data from Hadoop to your database
Run Sqoop within Oozie, Hadoop’s specialized workflow scheduler
Load data into Hadoop’s data warehouse (Hive) or database (HBase)
Handle installation, connection, and syntax issues common to specific database vendors
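The import, incremental-sync, and export tasks above all share the same command-line pattern. As a rough sketch (the connection string, credentials, table names, and HDFS paths below are placeholders, not recipes from the book), the commands look like this:

```shell
#!/bin/sh
# Sketch of common Sqoop invocations. Assumes a reachable MySQL server
# and a configured Hadoop cluster; all names here are illustrative.

# Import a single table into HDFS.
sqoop import \
  --connect jdbc:mysql://dbhost/shop \
  --username sqoop_user \
  --table orders \
  --target-dir /data/orders

# Keep the table and Hadoop in sync: append only rows whose
# id column exceeds the last imported value.
sqoop import \
  --connect jdbc:mysql://dbhost/shop \
  --username sqoop_user \
  --table orders \
  --incremental append \
  --check-column id \
  --last-value 10000

# Import a table directly into Hive's warehouse.
sqoop import \
  --connect jdbc:mysql://dbhost/shop \
  --username sqoop_user \
  --table orders \
  --hive-import

# Export processed results from HDFS back into a database table.
sqoop export \
  --connect jdbc:mysql://dbhost/shop \
  --username sqoop_user \
  --table order_summaries \
  --export-dir /data/summaries
```

Each recipe in the book varies this basic shape with vendor-specific connectors, free-form queries, and workflow integration.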