The Apache™ Hadoop™ project develops open-source software for reliable, scalable, distributed computing.
"Hadoop" typically refers to the software in the project that implements the map-reduce data analysis framework, plus the distributed file system (HDFS) that underlies it.
A cluster has a NameNode (at least one, and usually a second for redundancy) that accepts requests from client applications and coordinates many DataNodes, which share the processing work between them. What makes that sharing possible is that all the nodes have access to a common storage layer, typically referred to as the Hadoop Distributed File System, or HDFS.
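To make the client/NameNode/DataNode interaction concrete, here is a small sketch using the Java FileSystem API. The NameNode address and file path are placeholders; note that client code never addresses DataNodes directly, the library routes block reads and writes to them behind the scenes.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHello {
  public static void main(String[] args) throws Exception {
    // fs.defaultFS points at the NameNode; this address is a placeholder.
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/tmp/hello.txt");

    // Write: the NameNode allocates blocks; the bytes go to DataNodes.
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("hello, hdfs\n".getBytes(StandardCharsets.UTF_8));
    }

    // Read back: the NameNode tells the client which DataNodes hold the blocks.
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
      System.out.println(in.readLine());
    }
  }
}
```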
Apache Hadoop also works with other filesystems: the platform-specific "local" filesystem, blob stores such as Amazon S3 and Azure Storage, and alternative distributed filesystems. See: Filesystem Compatibility with Apache Hadoop.
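Because the FileSystem API is pluggable, switching storage backends is mostly a matter of the URI scheme. A minimal sketch, assuming the hadoop-aws module (which provides the s3a connector) is on the classpath and credentials come from the usual AWS provider chain; the bucket name is hypothetical:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBucket {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Same API as HDFS; the s3a scheme selects the S3 connector.
    FileSystem s3 = FileSystem.get(URI.create("s3a://my-bucket/"), conf);
    for (FileStatus status : s3.listStatus(new Path("s3a://my-bucket/data/"))) {
      System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
    }
  }
}
```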
Since version 0.23, Hadoop has shipped with a standalone resource manager: YARN (a small client-side sketch follows the list below).
This resource manager makes it easier to run other modules alongside the MapReduce engine, such as:
- Accumulo, a sorted, distributed key/value store that provides robust, scalable data storage and retrieval.
- Ambari, a web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters, with support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig, and Sqoop. Ambari also provides a dashboard for viewing cluster health (such as heatmaps) and for inspecting MapReduce, Pig, and Hive applications visually, along with features to diagnose their performance characteristics in a user-friendly manner.
- Avro, a data serialization system whose schemas are defined in JSON.
- Cassandra, a replicated, fault-tolerant, decentralized and scalable database system.
- Chukwa, a data collection system for managing large distributed systems.
- Cascading, a software abstraction layer for Apache Hadoop aimed mainly at Java developers; it was developed to spare MapReduce programmers the boilerplate code that plain MapReduce jobs require.
- Flink, a fast and reliable large-scale data processing engine.
- Giraph, an iterative graph processing framework built on top of Apache Hadoop.
- HBase, a scalable, distributed database that supports structured data storage for large tables.
- Hive, a data warehouse infrastructure that provides data summarization and ad hoc querying.
- Mahout, a library of machine learning algorithms compatible with the MapReduce paradigm.
- Oozie, a workflow scheduler system to manage Apache Hadoop jobs.
- Pig, a platform/programming language for authoring parallelizable jobs.
- Spark, a fast and general engine for large-scale data processing.
- Storm, a system for real-time stream processing.
- Tez, an extensible framework for building high-performance batch and interactive data processing applications, coordinated by YARN.
- ZooKeeper, a system for coordinating distributed nodes, similar to Google's Chubby.
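Coming back to YARN itself: a quick way to see the ResourceManager at work is to query it for the cluster's live nodes. Here is a minimal sketch using the YarnClient API; it assumes a yarn-site.xml with the ResourceManager address is on the classpath.

```java
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ClusterNodes {
  public static void main(String[] args) throws Exception {
    // Picks up the ResourceManager address from yarn-site.xml on the classpath.
    Configuration conf = new YarnConfiguration();

    YarnClient yarn = YarnClient.createYarnClient();
    yarn.init(conf);
    yarn.start();
    try {
      // Ask the ResourceManager which NodeManagers are currently running.
      List<NodeReport> nodes = yarn.getNodeReports(NodeState.RUNNING);
      for (NodeReport node : nodes) {
        System.out.println(node.getNodeId()
            + "  vcores=" + node.getCapability().getVirtualCores());
      }
    } finally {
      yarn.stop();
    }
  }
}
```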