# MapReduce Usage
This page describes how to use the MongoDB Hadoop Connector with vanilla MapReduce.
- Obtain the MongoDB Hadoop Connector. You can either build it or download the jars. The releases page also includes instructions for use with Maven and Gradle. For MapReduce, all you need is the "core" jar.
- Get a JAR for the MongoDB Java Driver.
- Each node in the cluster will need access to the MongoDB Hadoop Connector JAR as well as the JAR for the MongoDB Java Driver. You can provision each machine in the cluster with the necessary JARs in `$HADOOP_HOME/share/hadoop/common`, for example, or you may use the Hadoop DistributedCache to distribute the JARs to pre-existing nodes. This is easily done using the `-libjars` option of the `hadoop jar` command:

```
hadoop jar \
    -libjars mongo-hadoop-core.jar,mongo-java-driver.jar \
    MyJob.jar com.mycompany.HadoopJob
```
See the instructions on the releases page for how to easily include the MongoDB Hadoop Connector jars in your MapReduce project.
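For reference, pulling the core jar in from a build tool typically looks like the following sketch. The coordinates shown are an assumption based on common Maven Central naming; confirm the exact group, artifact, and version on the releases page before using them.

```
// build.gradle -- coordinates and version are assumptions;
// substitute the values given on the releases page.
dependencies {
    compile 'org.mongodb.mongo-hadoop:mongo-hadoop-core:<version>'
}
```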
The MongoDB Hadoop Connector can do input/output with a live MongoDB cluster or with BSON files (such as those generated by `mongodump`):
| Direction | MongoDB             | BSON                   |
|-----------|---------------------|------------------------|
| Input     | `MongoInputFormat`  | `BSONFileInputFormat`  |
| Output    | `MongoOutputFormat` | `BSONFileOutputFormat` |
These formats are found in the `com.mongodb.hadoop` package. For MapReduce 1.x, the equivalent classes are in the `com.mongodb.hadoop.mapred` package.
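To show how these formats fit into a job driver, here is a minimal sketch of a job that reads from one live MongoDB collection and writes counts to another, using the new-API (`com.mongodb.hadoop`) classes. The `MongoConfigUtil` helper and the collection URIs are the connector's usual configuration mechanism, but treat the exact helper names, the URIs, and the `status` field as illustrative assumptions; this cannot run without a Hadoop cluster and the connector jars on the classpath.

```java
import java.io.IOException;

import com.mongodb.hadoop.MongoInputFormat;
import com.mongodb.hadoop.MongoOutputFormat;
import com.mongodb.hadoop.util.MongoConfigUtil;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.bson.BSONObject;

public class HadoopJob {

    // MongoInputFormat presents each document as a BSONObject value.
    // Counting by a "status" field is a hypothetical example.
    public static class StatusMapper
            extends Mapper<Object, BSONObject, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(Object key, BSONObject doc, Context ctx)
                throws IOException, InterruptedException {
            Object status = doc.get("status");
            if (status != null) {
                ctx.write(new Text(status.toString()), ONE);
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> vals, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : vals) {
                sum += v.get();
            }
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Replace these URIs with your own collections.
        MongoConfigUtil.setInputURI(conf, "mongodb://localhost:27017/db.in");
        MongoConfigUtil.setOutputURI(conf, "mongodb://localhost:27017/db.out");

        Job job = Job.getInstance(conf, "mongo status counts");
        job.setJarByClass(HadoopJob.class);
        job.setMapperClass(StatusMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Wire in the connector's input/output formats from the table above.
        job.setInputFormatClass(MongoInputFormat.class);
        job.setOutputFormatClass(MongoOutputFormat.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

To read `mongodump` output instead, you would swap `MongoInputFormat` for `BSONFileInputFormat` and point the job at the `.bson` files rather than a collection URI.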
There are a number of examples for writing MapReduce jobs using the MongoDB Hadoop Connector.