The Hadoop Map/Reduce framework harnesses a cluster of machines and executes user-defined Map/Reduce jobs across the nodes in the cluster. On itasca, a script exists to create an ephemeral Hadoop cluster on the set of nodes assigned by the scheduler. The setup_cluster script formats an HDFS filesystem on the local scratch disks.
This resource is best suited for application benchmarking and algorithm testing. Because the cluster is ephemeral, all input data must be moved into HDFS after the cluster is brought up at the start of the job, and any data you wish to keep must be copied back to your home directory before the job completes. Many job scripts follow this pattern:
- Set up the cluster
- Move input data into HDFS with "hadoop fs -put"
- Execute the test program
- Move results back to your home directory with "hadoop fs -get"
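The staging steps above can be sketched as the following shell commands; the dataset, jar, and directory names (mydata, MyJob.jar, input, output) are placeholders for illustration, not files that exist on the system.

```shell
# 1. Copy input data from the home directory into HDFS
hadoop fs -put $HOME/mydata input

# 2. Run the Map/Reduce job against the HDFS copy
hadoop jar MyJob.jar input output

# 3. Copy results out of HDFS before the job (and the cluster) ends
hadoop fs -get output $HOME/mydata-results
```

These commands only work inside a job where setup_cluster has already started HDFS; once the job ends, anything left in HDFS on the scratch disks is lost.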
If you need a persistent cluster for your work, please see the information at: https://www.msi.umn.edu/hpc/red
To run this software in a Linux environment, submit a PBS job script such as the following:
#!/bin/bash -l
#
#PBS -m n
#PBS -l nodes=4:ppn=8
#PBS -l walltime=24:00:00
#PBS -q batch
#
cd $PBS_O_WORKDIR
module load hadoop
setup_cluster
start-all.sh
sleep 90
time hadoop jar \
    $HADOOP_HOME/hadoop-examples-1.0.3.jar \
    randomwriter random_example \
    $HADOOP_HOME/scripts/random.xml
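A script like the one above is submitted to the scheduler with qsub; the filename hadoop_job.pbs is a hypothetical example, not a file provided by the system.

```shell
# Submit the job script to the PBS batch queue
qsub hadoop_job.pbs

# Check the status of your queued and running jobs
qstat -u $USER
```

The sleep after start-all.sh gives the Hadoop daemons time to come up before the example job is launched.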