Chapter 1 Meet Hadoop
Data is growing large, but disk transfer speeds have not improved at the same rate. It takes a long time to read all the data from a single disk, and writing is even slower. The obvious way to reduce the time is to read from multiple disks at once.
The first problem this raises is hardware failure: with more disks, the chance that one fails goes up. The second problem is that most analysis tasks need to combine data spread across different machines.
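The speed-up from reading in parallel can be made concrete with a small back-of-the-envelope calculation. The numbers below (1 TB of data, ~100 MB/s per disk, 100 disks) are illustrative assumptions, not figures from these notes:

```python
# Illustrative arithmetic: scan time for data striped evenly across N disks.
def read_time_seconds(total_bytes, disks, mb_per_sec=100):
    """Time to scan the data when it is split evenly across `disks` disks."""
    per_disk = total_bytes / disks
    return per_disk / (mb_per_sec * 1024 * 1024)

one_tb = 1024 ** 4
print(read_time_seconds(one_tb, 1) / 60)    # one disk: roughly 175 minutes
print(read_time_seconds(one_tb, 100) / 60)  # 100 disks: under 2 minutes
```

Reading from 100 disks at once cuts a multi-hour scan down to about two minutes, which is exactly the win the text describes.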

Chapter 3 The Hadoop Distributed Filesystem
Filesystems that manage storage across a network of machines are called distributed filesystems. HDFS is a filesystem designed for storing very large files with streaming data access patterns, running on clusters of commodity hardware.
HDFS is built around the idea that the most efficient data processing pattern is a write-once, read-many-times pattern.
HDFS is not a good fit for: low-latency data access, lots of small files, multiple writers, or arbitrary file modifications.

HDFS concepts
1) Blocks:
HDFS has the concept of a block, but it is a much larger unit—64 MB by default. HDFS blocks are large compared to disk blocks, and the reason is to minimize the cost of seeks.
Having a block abstraction for a distributed filesystem brings several benefits. The first benefit is the most obvious: a file can be larger than any single disk in the network. Second, making the unit of abstraction a block rather than a file simplifies the storage subsystem. Furthermore, blocks fit well with replication for providing fault tolerance and availability.
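How a file maps onto fixed-size blocks can be sketched in a few lines. This is a simplified illustration (not the HDFS implementation), using the 64 MB default block size mentioned above:

```python
# Sketch: splitting a file into fixed-size HDFS blocks (64 MB default).
import math

BLOCK_SIZE = 64 * 1024 * 1024  # 64 MB, the default stated in the text

def num_blocks(file_size):
    """Number of blocks a file of `file_size` bytes occupies."""
    return max(1, math.ceil(file_size / BLOCK_SIZE))

print(num_blocks(1024 * 1024 * 1024))  # a 1 GB file -> 16 blocks
print(num_blocks(1024 * 1024))         # a 1 MB file -> 1 block
```

Note that, unlike a single-disk filesystem, a file smaller than one block does not occupy a full block's worth of underlying storage in HDFS.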

2) Namenodes and Datanodes
The namenode manages the filesystem namespace. It maintains the filesystem tree and the metadata for all the files and directories in the tree.
It is important to make the namenode resilient to failure, and Hadoop provides two mechanisms for this. The first is to back up the files that make up the persistent state of the filesystem metadata. The second is to run a secondary namenode, which, despite its name, does not act as a namenode; its main role is to periodically merge the namespace image with the edit log.
Datanodes are the workhorses of the filesystem. They store and retrieve blocks when they are told to (by clients or the namenode), and they report back to the namenode periodically with lists of blocks that they are storing.
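The division of labour between namenode and datanodes can be mimicked with a toy model. Class and method names below are assumptions for illustration, not Hadoop APIs:

```python
# Toy model: the namenode holds the namespace and the block->datanode
# mapping; datanodes periodically report the blocks they store.
class Namenode:
    def __init__(self):
        self.namespace = {}        # file path -> ordered list of block ids
        self.block_locations = {}  # block id -> set of datanode ids

    def report_blocks(self, datanode_id, block_ids):
        """Handle a periodic block report from one datanode."""
        for b in block_ids:
            self.block_locations.setdefault(b, set()).add(datanode_id)

    def locate(self, path):
        """For each block of the file, return which datanodes hold it."""
        return [self.block_locations.get(b, set()) for b in self.namespace[path]]

nn = Namenode()
nn.namespace["/user/data.txt"] = ["blk_1", "blk_2"]
nn.report_blocks("dn1", ["blk_1", "blk_2"])
nn.report_blocks("dn2", ["blk_1"])
# blk_1 is now known to live on dn1 and dn2; blk_2 only on dn1
print(nn.locate("/user/data.txt"))
```

The key point the model captures is that block locations are not persisted by the namenode: they are rebuilt from the datanodes' reports.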

3) HDFS Federation
The namenode keeps a reference to every file and block in the filesystem in memory, which means that on very large clusters with many files, memory becomes the limiting factor for scaling. HDFS Federation, introduced in the 2.x release series, allows a cluster to scale by adding namenodes, each of which manages a portion of the filesystem namespace. For example, one namenode might manage all the files rooted under /user, say, and a second namenode might handle files under /share.
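Federation-style routing by namespace volume can be sketched as a mount-table lookup. The /user and /share paths come from the text; the routing logic itself is an illustration, not Hadoop's ViewFS:

```python
# Sketch: routing a path to the namenode that manages its namespace volume.
MOUNT_TABLE = {"/user": "namenode-1", "/share": "namenode-2"}

def route(path):
    """Pick the namenode whose namespace volume contains `path`."""
    for prefix, nn in MOUNT_TABLE.items():
        if path == prefix or path.startswith(prefix + "/"):
            return nn
    raise KeyError(f"no namenode manages {path}")

print(route("/user/alice/log.txt"))   # namenode-1
print(route("/share/libs/tool.jar"))  # namenode-2
```

Because each namenode manages its volume independently, adding a namenode adds metadata capacity without the namenodes having to coordinate with each other.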

Hadoop has an abstract notion of filesystem, of which HDFS is just one implementation.
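In Hadoop's Java API this abstraction is the class org.apache.hadoop.fs.FileSystem, with HDFS, the local filesystem, and others as concrete implementations. The Python sketch below only mirrors the shape of that idea (the names here are assumed for illustration):

```python
# Sketch: one abstract filesystem interface, multiple implementations.
from abc import ABC, abstractmethod

class FileSystem(ABC):
    @abstractmethod
    def open(self, path):
        """Return a description of reading `path` from this filesystem."""

class LocalFileSystem(FileSystem):
    def open(self, path):
        return f"reading {path} from local disk"

class DistributedFileSystem(FileSystem):
    def open(self, path):
        return f"reading {path} from HDFS"

# Client code works against the interface, not a concrete filesystem.
for fs in (LocalFileSystem(), DistributedFileSystem()):
    print(fs.open("/user/data.txt"))
```

This is why programs written against the abstract interface can run unchanged against any backing store.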

Data Flow
1) Anatomy of a File Read
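The notes leave this section blank. In outline, the read path is: the client asks the namenode for the locations of the file's blocks, then streams each block directly from one of the datanodes holding it (normally the closest one). A hedged sketch, with illustrative data structures rather than Hadoop's classes:

```python
# Sketch of the read path: consult the namenode for block locations,
# then fetch each block directly from a datanode.
def read_file(namenode, datanodes, path):
    located_blocks = namenode[path]      # [(block id, [datanode ids]), ...]
    data = b""
    for block_id, holders in located_blocks:
        dn = holders[0]                  # in HDFS: the closest datanode
        data += datanodes[dn][block_id]
    return data

namenode = {"/logs/a": [("blk_1", ["dn1"]), ("blk_2", ["dn2"])]}
datanodes = {"dn1": {"blk_1": b"hello "}, "dn2": {"blk_2": b"world"}}
print(read_file(namenode, datanodes, "/logs/a"))  # b'hello world'
```

Note the namenode only serves metadata; the file's bytes never pass through it, which is what lets HDFS scale to many concurrent readers.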

2) Anatomy of a File Write
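This section is also blank in the notes. In outline, the write path is: the client asks the namenode to create the file, then writes each block through a pipeline of datanodes so that every replica is stored. The sketch below simplifies away HDFS's packet/ack protocol; the names are illustrative:

```python
# Sketch of the write path: each block is forwarded along a pipeline of
# datanodes, leaving one replica on each node in the pipeline.
def write_block(block, pipeline, stores):
    """Forward `block` along the datanode pipeline; each node keeps a copy."""
    for dn in pipeline:
        stores.setdefault(dn, []).append(block)

stores = {}
write_block(b"block-data", ["dn1", "dn2", "dn3"], stores)
print(sorted(stores))  # one replica on each of dn1, dn2, dn3
```

With the default replication factor of 3, each block ends up on three datanodes, which is what makes the disk-failure problem from Chapter 1 survivable.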