How data or file is written into Hadoop HDFS?
Answer / Manish Chandra Yadav
Data or files are written into Hadoop HDFS with the 'hadoop fs -put' command (or its equivalent 'hdfs dfs -put'), followed by the local source path and the destination HDFS path. For example: hadoop fs -put /local/input /user/username/output. Under the hood, the client asks the NameNode for a set of target DataNodes, splits the file into fixed-size blocks, and streams each block through a pipeline of DataNodes so that every block is replicated (three copies by default) before the write is acknowledged.
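A minimal sketch of the write described above, assuming a running HDFS cluster and that /local/input exists on the client machine (all paths here are illustrative):

```shell
# Copy a local file into HDFS (fails if the destination already exists)
hadoop fs -put /local/input /user/username/output

# Equivalent modern form of the same command
hdfs dfs -put /local/input /user/username/output

# Verify the write; the second column of -ls output shows the replication factor
hadoop fs -ls /user/username/output

# Optionally override the block size for this one file via a generic -D option
# (value is in bytes; 134217728 = 128 MB)
hadoop fs -D dfs.blocksize=134217728 -put /local/input /user/username/output_128m
```

Note that dfs.blocksize passed this way only affects files written by that command; it does not change the block size of files already stored in HDFS.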
More Apache Hadoop HDFS interview questions:

If the source data gets updated every now and then, how will you synchronize the data in HDFS that is imported by Sqoop?
Explain the HDFS architecture and list the various HDFS daemons in an HDFS cluster?
How to read a file in HDFS?
Does HDFS allow a client to read a file which is already opened for writing?
Why does HDFS store data on commodity hardware despite the higher chance of failures?
What happens when two clients try to access the same file on HDFS?
How does HDFS get good throughput?
How can one format Hadoop HDFS?
What is throughput in HDFS?
How to copy a file into HDFS with a block size different from the existing block size configuration?