Apache Hadoop is a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications with both reliability and data motion. Hadoop implements a computational paradigm named Map/Reduce, in which the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system (HDFS) that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both MapReduce and the Hadoop Distributed File System are designed so that node failures are handled automatically by the framework.
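To make the Map/Reduce paradigm concrete, here is a minimal word-count sketch written against the Hadoop 2.x-style org.apache.hadoop.mapreduce API; the class names and the two-argument command line are illustrative assumptions rather than part of any particular release.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative word-count job: the map phase emits (word, 1) pairs,
// the reduce phase sums the counts for each word.
public class WordCount {

  // Map phase: split each input line into words and emit (word, 1).
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: all counts for the same word arrive together and are summed.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Assuming the class is packaged into a jar, the job would typically be launched with a command along the lines of `hadoop jar wordcount.jar WordCount input output`, where `input` and `output` are HDFS directories.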
General Information
- Official Apache Hadoop Website: download, bug-tracking, mailing-lists, etc.
- Overview of Apache Hadoop
- FAQ: Frequently Asked Questions.
- Distributions and Commercial Support for Hadoop (RPMs, Debs, AMIs, etc)
- PoweredBy, a growing list of sites and applications powered by Apache Hadoop
- Support
- HadoopUserGroups (HUGs)
Related Projects
- HBase, a Bigtable-like structured storage system for Hadoop HDFS
- Apache Pig is a high-level data-flow language and execution framework for parallel computation. It is built on top of Hadoop Core.
- Hive, a data warehouse infrastructure that allows SQL-like ad-hoc querying of data (in any format) stored in Hadoop
- ZooKeeper is a high-performance coordination service for distributed applications.
- Hama, a distributed computing framework similar to Google's Pregel, based on BSP (Bulk Synchronous Parallel) computing techniques for massive scientific computations.
- Mahout, scalable Machine Learning algorithms using Hadoop
User Documentation
- GettingStartedWithHadoop (lots of details and explanation)
- QuickStart (for those who just want it to work now)
- Command Line Options for the Hadoop shell scripts.
- Troubleshooting: what to do when things go wrong
Setting up a Hadoop Cluster
- HowToConfigure Hadoop software
- Performance: getting extra throughput
- Virtual Clusters including Amazon AWS
- Virtual Hadoop: the theory
- How to set up a Virtual Cluster
- Running Hadoop on AmazonEC2
- Running Hadoop with AmazonS3
Tutorials
- Running_Hadoop_On_Ubuntu_Linux_(Single-Node_Cluster): A tutorial on installing, configuring and running Hadoop on a single Ubuntu Linux machine.
- Hadoop Windows/Eclipse Tutorial: How to develop Hadoop with Eclipse on Windows.
- Yahoo! Hadoop Tutorial: Hadoop setup, HDFS, and MapReduce
MapReduce
The MapReduce algorithm is the foundation of Hadoop, and it is critical to understand.
- Examples
- Benchmarks
Contributed parts of the Hadoop codebase
- These are independent modules that live in the Hadoop codebase but are not yet tightly integrated with the main project.
- HadoopStreaming (Useful for using Hadoop with other programming languages)
- DistributedLucene, a proposal for a distributed Lucene index in Hadoop
- MountableHDFS, Fuse-DFS & other Tools to mount HDFS as a standard filesystem on Linux (and some other Unix OSs)
- HDFS APIs in Perl, Python, PHP and other languages (see the Java FileSystem sketch after this list).
- Chukwa, a data collection, storage, and analysis framework
- The Apache Hadoop Plugin for Eclipse (An Eclipse plug-in that simplifies the creation and deployment of MapReduce programs with an HDFS Administrative feature)
- HDFS-RAID, erasure coding in HDFS
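As a companion to the language bindings listed above, the sketch below shows the Java FileSystem API that such bindings typically wrap. The fs.defaultFS value, port, and paths are illustrative assumptions, not values from any particular cluster.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative HDFS round trip: write a small file, then read it back.
public class HdfsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Point at the NameNode; in practice this is usually picked up from core-site.xml.
    conf.set("fs.defaultFS", "hdfs://localhost:9000");  // assumed address
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/user/demo/hello.txt");  // assumed path

    // Write a small file, overwriting it if it already exists.
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("Hello, HDFS!\n".getBytes(StandardCharsets.UTF_8));
    }

    // Read it back line by line and print to stdout.
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }

    fs.close();
  }
}
```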