Bug 9 - nnbench test
Status: CONFIRMED
Alias: None
Reported: 2023-06-13 02:48 UTC
Modified: 2023-06-13 02:49 UTC


Description 2023-06-13 02:48:21 UTC
nnbench is used to benchmark the load on the NameNode: it generates a large number of HDFS-related requests, putting significant pressure on the NameNode.
    The test exercises create, read, rename, and delete operations on files in HDFS.
	
 hadoop jar /usr/hadoop-parafs/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar nnbench -help           
    NameNode Benchmark 0.4
    Usage: nnbench <options>
    Options:
        -operation <Available operations are create_write open_read rename delete. This option is mandatory>
         * NOTE: The open_read, rename and delete operations assume that the files they operate on, are already available. The create_write operation must be run before running the other operations.
        -maps <number of maps. default is 1. This is not mandatory>
        -reduces <number of reduces. default is 1. This is not mandatory>
        -startTime <time to start, given in seconds from the epoch. Make sure this is far enough into the future, so all maps (operations) will start at the same time. default is launch time + 2 mins. This is not mandatory>
        -blockSize <Block size in bytes. default is 1. This is not mandatory>
        -bytesToWrite <Bytes to write. default is 0. This is not mandatory>
        -bytesPerChecksum <Bytes per checksum for the files. default is 1. This is not mandatory>
        -numberOfFiles <number of files to create. default is 1. This is not mandatory>
        -replicationFactorPerFile <Replication factor for the files. default is 1. This is not mandatory>
        -baseDir <base DFS path. default is /becnhmarks/NNBench. This is not mandatory>
        -readFileAfterOpen <true or false. if true, it reads the file and reports the average time to read. This is valid with the open_read operation. default is false. This is not mandatory>
        -help: Display the help statement
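The `-startTime` option above expects an absolute timestamp in seconds since the epoch. One way to compute a value far enough in the future for all maps to start together (a sketch, assuming a POSIX shell and `date +%s`; the 5-minute offset is an arbitrary choice):

```shell
# Start time 5 minutes from now, in epoch seconds, so all maps
# (operations) can begin at the same moment.
START_TIME=$(( $(date +%s) + 300 ))
echo "$START_TIME"
# Pass it to nnbench as: ... nnbench -operation create_write -startTime "$START_TIME" ...
```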


	Note: before running open_read, rename, or delete, run create_write first to create the data those operations work on; after each case finishes, use delete to clean up the test environment.
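The sequencing rule above can be sketched as a small driver script. This is only a sketch: the jar path and option values are taken from the commands in this report, and the script echoes each command instead of executing it, so it can be reviewed before being run against a real cluster (drop the `echo` to run for real):

```shell
#!/bin/sh
# Run the four nnbench cases in the required order: create_write first
# (it creates the files the other cases need), delete last (it also
# cleans up the test data).
JAR=/usr/hadoop-parafs/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar
COMMON="-maps 4 -reduces 2 -bytesToWrite 10485760 -numberOfFiles 50 -replicationFactorPerFile 3"

for OP in create_write open_read rename delete; do
    # Print the command instead of running it; remove 'echo' on a real cluster.
    echo hadoop jar "$JAR" nnbench -operation "$OP" $COMMON
done
```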
	
	 1. Create files and write data
	 
       1> Use 4 mappers and 2 reducers to create 50 files, 10 MB each
          hadoop jar /usr/hadoop-parafs/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar nnbench -operation create_write -maps 4 -reduces 2 -bytesToWrite 10485760 -numberOfFiles 50 -replicationFactorPerFile 3 -readFileAfterOpen true 

	   2> View the results:
	      cat NNBench_results.log
		  
	   
	 2. Read files
	 
       1> Use 4 mappers and 2 reducers to read the 50 files
          hadoop jar /usr/hadoop-parafs/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar nnbench -operation open_read -maps 4 -reduces 2 -bytesToWrite 10485760 -numberOfFiles 50 -replicationFactorPerFile 3 

	   
	   2> View the results:
	      cat NNBench_results.log 
		  
		  
	   
	 3. Rename files
	 

       1> Use 4 mappers and 2 reducers to rename the 50 files
          hadoop jar /usr/hadoop-parafs/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar nnbench -operation rename -maps 4 -reduces 2 -bytesToWrite 10485760 -numberOfFiles 50 -replicationFactorPerFile 3 -readFileAfterOpen true 

	   
	   2> View the results:
	      cat NNBench_results.log
		  
	   
	  4. Delete files
	 
       1> Use 4 mappers and 2 reducers to delete the 50 files
          hadoop jar /usr/hadoop-parafs/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar nnbench -operation delete -maps 4 -reduces 2 -bytesToWrite 10485760 -numberOfFiles 50 -replicationFactorPerFile 3 
  
     
	   2> View the results:
	      cat NNBench_results.log
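Rather than reading the whole log, the headline numbers can be pulled out with a filter. This is a hedged sketch: the field names matched below (such as "TPS" and "Avg exec time") are assumed from memory of the Hadoop 2.x nnbench report format and should be verified against an actual NNBench_results.log:

```shell
# Extract the summary lines from the nnbench report.
# The field names in the pattern are assumptions about the 2.7.3 log format.
grep -E 'TPS|Avg exec time|Successful file operations' NNBench_results.log
```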