MCQs on Advanced HDFS Commands | Hadoop HDFS

Master advanced HDFS commands for efficient management of the distributed file system. Explore fsck for health checks, advanced file operations, handling DataNode failures, and managing permissions and ownership directly through the command-line interface (CLI).


1. Using fsck Command to Check HDFS Health

  1. What is the purpose of the hdfs fsck command?
    • A) To check DataNode configurations
    • B) To perform HDFS health checks
    • C) To replicate blocks
    • D) To delete corrupted files
  2. Which option with fsck displays detailed block information?
    • A) -move
    • B) -files
    • C) -blocks
    • D) -replication
  3. What does the hdfs fsck command report?
    • A) Metadata and user access details
    • B) File system status and block health
    • C) NameNode uptime
    • D) DataNode performance
  4. How can you use fsck to fix under-replicated blocks?
    • A) By running hdfs fsck -fix
    • B) By restarting NameNode
    • C) By enabling dynamic replication
    • D) By increasing the replication factor manually
  5. What does the CORRUPT status in fsck output indicate?
    • A) Replication is too high
    • B) Block files are missing or inaccessible
    • C) DataNode is overloaded
    • D) File permissions are incorrect
  6. How do you exclude non-critical errors in the fsck report?
    • A) Use -ignore
    • B) Use -status
    • C) Use -locations
    • D) Use -show
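The questions above revolve around `hdfs fsck`. As a quick reference, a typical health-check session might look like the sketch below (the paths are illustrative; the commands must be run against a live cluster):

```shell
# Summary health report for the entire namespace
hdfs fsck /

# Detailed view: per-file block lists, block IDs, and replica locations
hdfs fsck /user/data -files -blocks -locations

# List only the corrupt file blocks
hdfs fsck / -list-corruptfileblocks

# fsck only *reports* under-replicated blocks; the NameNode re-replicates
# them automatically, or you can raise the target replication yourself:
hdfs dfs -setrep -w 3 /user/data/important.csv
```

Note that fsck is a read-only diagnostic by default; `-move` and `-delete` are the only options that change the namespace (relocating or removing corrupted files).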

2. Advanced HDFS File Management

  1. Which command moves a file within HDFS?
    • A) hdfs dfs -moveToLocal
    • B) hdfs dfs -mv
    • C) hdfs dfs -cp
    • D) hdfs dfs -rm
  2. What does hdfs dfs -copyToLocal do?
    • A) Copies files from HDFS to the local filesystem
    • B) Copies files between HDFS directories
    • C) Moves files from HDFS to local storage
    • D) Creates backup copies in HDFS
  3. How can you upload files from the local system to HDFS?
    • A) hdfs dfs -moveFromLocal
    • B) hdfs dfs -put
    • C) hdfs dfs -copyToLocal
    • D) hdfs dfs -fetch
  4. Which command is used to rename a file in HDFS?
    • A) hdfs dfs -rename
    • B) hdfs dfs -mv
    • C) hdfs dfs -cp
    • D) hdfs dfs -edit
  5. What is the difference between -put and -copyFromLocal?
    • A) -put requires HDFS privileges, -copyFromLocal does not
    • B) -put overwrites by default, -copyFromLocal does not
    • C) Both commands are identical
    • D) -put is faster than -copyFromLocal
  6. How do you recursively copy a directory in HDFS?
    • A) hdfs dfs -copyDir
    • B) hdfs dfs -cp
    • C) hdfs dfs -cp -r
    • D) hdfs dfs -recursive
  7. What happens when you run hdfs dfs -rmr on a directory?
    • A) Deletes only the specified files
    • B) Deletes the directory recursively
    • C) Archives the directory
    • D) Deletes directory metadata
  8. Which option allows you to append data to an existing HDFS file?
    • A) -appendToFile
    • B) -addToFile
    • C) -editFile
    • D) -extendFile
  9. How do you view the contents of an HDFS file without downloading it?
    • A) hdfs dfs -read
    • B) hdfs dfs -cat
    • C) hdfs dfs -head
    • D) hdfs dfs -open
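The file-management commands quizzed above can be combined into a typical workflow. The following sketch uses hypothetical paths and filenames:

```shell
# Upload from the local filesystem (-put and -copyFromLocal behave alike)
hdfs dfs -put report.csv /user/alice/
hdfs dfs -put -f report.csv /user/alice/   # -f overwrites an existing file

# Move/rename within HDFS, then download a copy
hdfs dfs -mv /user/alice/report.csv /user/alice/2024-report.csv
hdfs dfs -copyToLocal /user/alice/2024-report.csv ./backup/

# Append local data to an existing HDFS file, then view it in place
hdfs dfs -appendToFile extra.csv /user/alice/2024-report.csv
hdfs dfs -cat /user/alice/2024-report.csv

# Recursive delete (-rmr is deprecated; modern releases use -rm -r)
hdfs dfs -rm -r /user/alice/tmp
```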

3. DataNode Failures and Recovery Procedures

  1. What happens to data stored on a failed DataNode?
    • A) It is automatically deleted
    • B) It is replicated to other nodes
    • C) It becomes inaccessible permanently
    • D) It is moved to a backup directory
  2. Which HDFS process ensures data availability during DataNode failures?
    • A) Automatic failover
    • B) DataNode rebalancing
    • C) Block replication
    • D) SecondaryNameNode synchronization
  3. How can you manually decommission a DataNode?
    • A) By updating the hdfs-site.xml file
    • B) By restarting the NameNode
    • C) By disabling dynamic replication
    • D) By stopping the SecondaryNameNode
  4. What command is used to check the status of DataNodes?
    • A) hdfs dfsadmin -report
    • B) hdfs dfs -ls
    • C) hdfs fsck -nodes
    • D) hdfs admin -datanode
  5. How does HDFS handle a full DataNode disk?
    • A) DataNode is shut down
    • B) Blocks are moved to other nodes
    • C) Replication factor is increased
    • D) Data is deleted from the node
  6. What is the first step in recovering a failed DataNode?
    • A) Restart the NameNode
    • B) Restart the DataNode service
    • C) Delete corrupted blocks
    • D) Decrease replication factor
  7. What does HDFS safe mode enable during failure recovery?
    • A) Enables block edits
    • B) Disables write operations temporarily
    • C) Enables data movement
    • D) Resets DataNode logs
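The DataNode-recovery concepts above map onto a handful of `dfsadmin` commands. A rough sketch (cluster-specific; the exclude-file mechanism assumes `dfs.hosts.exclude` is configured in hdfs-site.xml):

```shell
# Live/dead DataNodes, capacity, and remaining space per node
hdfs dfsadmin -report

# Decommission a DataNode: add its hostname to the exclude file referenced
# by dfs.hosts.exclude in hdfs-site.xml, then tell the NameNode to re-read it
hdfs dfsadmin -refreshNodes

# Safe mode: a read-only state the NameNode uses during startup and recovery
hdfs dfsadmin -safemode get
hdfs dfsadmin -safemode leave
```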

4. Managing Permissions and Ownership via CLI

  1. Which command changes the ownership of an HDFS file?
    • A) hdfs dfs -setOwner
    • B) hdfs dfs -chown
    • C) hdfs dfs -perm
    • D) hdfs dfs -own
  2. How do you modify the permissions of an HDFS file?
    • A) hdfs dfs -chmod
    • B) hdfs dfs -setPerm
    • C) hdfs dfs -chperm
    • D) hdfs dfs -modPerm
  3. What does the rwx format in HDFS file permissions mean?
    • A) Read, write, execute
    • B) Replication, write, execute
    • C) Read, write, delete
    • D) Read, write, copy
  4. Which user can modify all files in HDFS?
    • A) The DataNode admin
    • B) The HDFS superuser
    • C) The NameNode user
    • D) Any authorized user
  5. What does the command hdfs dfs -ls -R do?
    • A) Lists all files recursively
    • B) Displays file ownerships only
    • C) Shows hidden files in HDFS
    • D) Lists files sorted by replication
  6. How do you ensure only specific users can access a file in HDFS?
    • A) By changing file ownership
    • B) By setting appropriate permissions
    • C) By moving the file to a private directory
    • D) By encrypting the file
  7. What does the sticky bit in HDFS permissions allow?
    • A) Only the file owner can delete or modify files in a directory
    • B) Files are replicated to specific nodes
    • C) Blocks are never moved between nodes
    • D) Files are locked for all users
  8. How do you verify HDFS file ownership?
    • A) hdfs dfs -checkOwner
    • B) hdfs dfs -ls
    • C) hdfs dfs -viewOwner
    • D) hdfs dfs -stat
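The permission and ownership commands above follow POSIX-style syntax. A short sketch with hypothetical users, groups, and paths:

```shell
# Change owner and group, then restrict permissions (owner rw-, group r--)
hdfs dfs -chown alice:analytics /data/reports/q1.csv
hdfs dfs -chmod 640 /data/reports/q1.csv

# Sticky bit (leading 1) on a shared directory: only a file's owner
# (or the superuser) may delete or rename files inside it
hdfs dfs -chmod 1777 /tmp/shared

# Verify: recursive listing shows permissions, owner, and group per entry
hdfs dfs -ls -R /data/reports
hdfs dfs -stat "%u %g" /data/reports/q1.csv
```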

Answers

  1. B) To perform HDFS health checks
  2. C) -blocks
  3. B) File system status and block health
  4. D) By increasing the replication factor manually (fsck only reports under-replicated blocks; there is no -fix option, and the NameNode also re-replicates them automatically)
  5. B) Block files are missing or inaccessible
  6. A) Use -ignore
  7. B) hdfs dfs -mv
  8. A) Copies files from HDFS to the local filesystem
  9. B) hdfs dfs -put
  10. B) hdfs dfs -mv
  11. C) Both commands are identical (-copyFromLocal merely restricts the source to the local filesystem; neither overwrites without -f)
  12. C) hdfs dfs -cp -r
  13. B) Deletes the directory recursively
  14. A) -appendToFile
  15. B) hdfs dfs -cat
  16. B) It is replicated to other nodes
  17. C) Block replication
  18. A) By updating the hdfs-site.xml file
  19. A) hdfs dfsadmin -report
  20. B) Blocks are moved to other nodes
  21. B) Restart the DataNode service
  22. B) Disables write operations temporarily
  23. B) hdfs dfs -chown
  24. A) hdfs dfs -chmod
  25. A) Read, write, execute
  26. B) The HDFS superuser
  27. A) Lists all files recursively
  28. B) By setting appropriate permissions
  29. A) Only the file owner can delete or modify files in a directory
  30. B) hdfs dfs -ls

Work through the questions on a blank sheet, note your answers, then tally them against the key above and give yourself a score.
