What are column families? What happens if you alter the block size of a column family on an already populated database?
Explain the origin of the name 'Hadoop'.
Explain the WordCount implementation in the Hadoop framework.
Explain the Hadoop core configuration files.
How is indexing done in HDFS?
What is the Hadoop framework?
How does the master-slave architecture work in Hadoop?
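For reference when answering, the WordCount map and reduce phases can be sketched without any Hadoop dependency; this is a minimal illustration of the logic, not the actual Hadoop API (the function names here are made up for clarity):

```python
from collections import defaultdict

def map_phase(lines):
    """Mapper: emit a (word, 1) pair for every word in every input line."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle/sort: group emitted values by key, as the framework does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

def wordcount(lines):
    return reduce_phase(shuffle(map_phase(lines)))
```

In real Hadoop these three stages correspond to the Mapper class, the framework's shuffle/sort, and the Reducer class.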
How will you make changes to the default configuration files?
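For context: the shipped defaults (e.g. core-default.xml) are not edited directly; overrides go in the site-specific files such as core-site.xml. A typical override looks like the fragment below (the hostname and port are placeholders):

```xml
<!-- core-site.xml: site-specific overrides for core-default.xml -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>
```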
Can you tell us more about SSH?
Is a job split into maps?
Shouldn't a DFS already be able to handle large volumes of data?
Can you give us some more details about SSH communication between the masters and the slaves?
What factors determine the block size before creation?
Which files are used by the startup and shutdown commands?
What is a partitioner in Hadoop? Where does it run, on the mapper side or the reducer side?
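As background for the last question: the partitioner runs on the map side, assigning each map-output key to one of the reduce tasks before the shuffle. Hadoop's default HashPartitioner uses the key's hash modulo the number of reducers; a hedged, Hadoop-free sketch of that logic (names here are illustrative, not the real API):

```python
def partition(key, num_reducers):
    """Mimic Hadoop's default HashPartitioner: hash(key) mod numReduceTasks."""
    return hash(key) % num_reducers

def partition_map_output(pairs, num_reducers):
    """Bucket map-output (key, value) pairs into one list per reducer,
    so that all values for a given key reach the same reduce task."""
    buckets = [[] for _ in range(num_reducers)]
    for key, value in pairs:
        buckets[partition(key, num_reducers)].append((key, value))
    return buckets
```

The essential guarantee is that equal keys always land in the same partition, which is why a reducer sees all values for its keys.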