Processing 1 Terabyte of Text in 7 Seconds without Hadoop


Interesting post from Silvius Rus from the Cluster Team at Quantcast.

He implemented a simple Sawzall program that processed 1 TB of text in 7 seconds, starting from disk, without using Hadoop (though he mentions that Quantcast’s proprietary MapReduce cluster is loosely based on Hadoop).

He also made an interesting design decision: dropping the sort phase of MapReduce and running the Reducer concurrently with the Mapper.
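Why does dropping the sort phase work? In classic MapReduce, the shuffle sorts mapper output so all values for a key arrive at the reducer together. If the reduction is commutative and associative (as with Sawzall-style aggregation tables, e.g. sums and counts), you can instead hash-aggregate inside each mapper and merge partial results as they arrive, with no global sort. The post does not show Rus's implementation; the following is a minimal illustrative sketch in Python, with all names invented for the example:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def map_reduce_no_sort(chunks):
    """Word count without a sort phase: each mapper keeps reducer
    state in a hash table and reduces as it maps; partial results
    are merged as soon as each mapper finishes."""
    def map_and_combine(chunk):
        counts = Counter()           # reducer state lives with the mapper
        for word in chunk.split():
            counts[word] += 1        # reduce immediately; nothing is emitted or sorted
        return counts

    total = Counter()
    with ThreadPoolExecutor() as pool:
        # merge partials in arrival order -- valid because addition
        # is commutative and associative
        for partial in pool.map(map_and_combine, chunks):
            total.update(partial)
    return total
```

For example, `map_reduce_no_sort(["a b a", "b c"])` yields counts of 2, 2, and 1 for `a`, `b`, and `c`. The trade-off is that this only works for order-insensitive reductions; a reducer that needs its values sorted (say, a median over the raw stream) still requires the sort phase.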


via Tumblr
