HiBench Configuration Guide: A Common Benchmark Suite for Hadoop

This article is also published and kept up to date on Jianshu.

Introduction

HiBench is a benchmark suite designed by Intel for evaluating big data frameworks; it can be used to measure how a Hadoop cluster performs on common computational tasks, ranging from plain sorting and word counting to machine learning, database (SQL) operations, graph processing, and web search. This article is a quick configuration guide for the hadoopbench part of HiBench. For more detailed usage, refer to the official wiki.

Software Dependencies

HiBench requires a Java environment and uses Maven for build management.

Install the Java runtime environment

Install the JDK & JRE

sudo apt-get install openjdk-8-jre openjdk-8-jdk

Version 8 is recommended; avoid version 9. After installation the default path is /usr/lib/jvm/java-8-openjdk-amd64; if yours differs, locate the correct path, for example with the commands below.
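Two standard Ubuntu commands for finding the installed JDK path (general utilities, not HiBench-specific):

# List the registered java binaries and their install paths
update-alternatives --list java
# Or resolve the real path of the java binary on the PATH
readlink -f "$(which java)"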

Add environment variables

cd
vim .bashrc

Add the following Java environment variable:

# JAVA PATH
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

Update the environment variables

source .bashrc

Test the Java environment

java -version

If the corresponding version information is printed, the configuration is correct:

hadoop@hadoop-master:~$ java -version
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)

Install Maven

Download the Maven package

wget http://apache.fayea.com/maven/maven-3/3.5.0/binaries/apache-maven-3.5.0-bin.zip

Extract the archive

unzip apache-maven-3.5.0-bin.zip -d /YOUR/PATH/TO/RESTORE

In this guide the archive is extracted to /usr/local/.
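As a concrete version of the command above, assuming the same /usr/local/ target (sudo is typically needed to write there):

sudo unzip apache-maven-3.5.0-bin.zip -d /usr/local/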

Add environment variables

cd
vim .bashrc

Add the following Maven environment variables:

# set maven environment
export M3_HOME=/usr/local/apache-maven-3.5.0
export PATH=$M3_HOME/bin:$PATH

Update the environment variables

source .bashrc

Test the Maven environment

mvn -v

If the corresponding version information is printed, the configuration is correct:

hadoop@hadoop-slave1:~$ mvn -v
Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04-04T03:39:06+08:00)
Maven home: /usr/local/apache-maven-3.5.0
Java version: 1.8.0_121, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-8-openjdk-amd64/jre
Default locale: en_US, platform encoding: ANSI_X3.4-1968
OS name: "linux", version: "4.4.0-53-generic", arch: "amd64", family: "unix"

Download HiBench

Running git clone https://github.com/intel-hadoop/HiBench.git can be quite slow; it is usually faster to download the zip archive from the project page and extract it into the desired directory.
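For example, the archive can be fetched directly from the command line; the URL below follows GitHub's standard master-branch archive pattern and the target directory is only an illustration:

# Download and extract the master branch as a zip archive
wget https://github.com/intel-hadoop/HiBench/archive/master.zip -O HiBench.zip
unzip HiBench.zip -d /usr/local/
# The extracted directory will be named HiBench-master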

Install HiBench

Change into the HiBench directory and run the corresponding build command, choosing the modules you want. For example, to build the module for testing SQL workloads on the Hadoop framework:

mvn -Phadoopbench -Dmodules -Psql -Dscala=2.11 clean package

More build commands are listed at https://github.com/intel-hadoop/HiBench/blob/master/docs/build-hibench.md.
Given the network conditions, it is advisable to build one module at a time, as shown in the example below; some modules can take a very long time to build.
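For instance, a build of only the micro workloads for Hadoop might look like the following; the -Pmicro profile name follows the pattern documented in build-hibench.md, so treat the profile names listed there as authoritative:

mvn -Phadoopbench -Dmodules -Pmicro clean package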

Configure HiBench

The two files that mainly need configuring are conf/hadoop.conf and conf/hibench.conf.

hadoop.conf

In this guide Hadoop is installed under /usr/local/hadoop-2.8.0, and a single-node setup at localhost:9000 is used as the example (use the real IP:port for a fully distributed cluster):

# Hadoop home
hibench.hadoop.home /usr/local/hadoop-2.8.0
# The path of hadoop executable
hibench.hadoop.executable /usr/local/hadoop-2.8.0/bin/hadoop
# Hadoop configuration directory
hibench.hadoop.configure.dir /usr/local/hadoop-2.8.0/etc/hadoop
# The root HDFS path to store HiBench data
hibench.hdfs.master hdfs://localhost:9000/user/hadoop/HiBench
# Hadoop release provider. Supported value: apache, cdh5, hdp
hibench.hadoop.release apache
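As a quick sanity check (assuming the Hadoop daemons are already running and the paths above match your installation), you can verify that the HDFS root configured in hibench.hdfs.master is reachable:

/usr/local/hadoop-2.8.0/bin/hdfs dfs -ls hdfs://localhost:9000/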

hibench.conf

# Data scale profile. Available value is tiny, small, large, huge, gigantic and bigdata.
# The definition of these profiles can be found in the workload's conf file i.e. conf/workloads/micro/wordcount.conf
hibench.scale.profile tiny
# Mapper number in hadoop, partition number in Spark
hibench.default.map.parallelism 8
# Reducer number in hadoop, shuffle partition number in Spark
hibench.default.shuffle.parallelism 8

These settings mainly control the data volume and the degree of parallelism used when the benchmarks run; see the illustrative workload configuration below.
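For reference, a workload-level file such as conf/workloads/micro/wordcount.conf defines one data size per profile and then selects the one matching hibench.scale.profile. The sketch below shows the general shape with illustrative values, not the exact numbers shipped with HiBench:

# Illustrative excerpt only; the real wordcount.conf ships its own values
hibench.wordcount.tiny.datasize     32000
hibench.wordcount.small.datasize    320000000
hibench.workload.datasize           ${hibench.wordcount.${hibench.scale.profile}.datasize}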

Run HiBench

Once the build has finished, you can run the included benchmarks. First, start Hadoop:

start-dfs.sh;
start-yarn.sh;
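You can optionally confirm that the daemons came up with jps (shipped with the JDK); on a single-node setup you would expect to see processes such as NameNode, DataNode, ResourceManager, and NodeManager:

jps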

For a quick Hadoop configuration tutorial, see the author's post "Hadoop真分布式集群最速搭建攻略" (a speed-run guide to setting up a fully distributed Hadoop cluster).

As an example, run the sort workload from the micro suite on the Hadoop framework:

bin/workloads/micro/sort/prepare/prepare.sh
bin/workloads/micro/sort/hadoop/run.sh

Wait for the MapReduce progress bar to reach completion; the detailed results and logs are available in report/sort/hadoop/bench.log. The output looks roughly like this:

17/05/11 15:16:41 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
Running on 1 nodes to sort from hdfs://localhost:9000/user/hadoop/HiBench/HiBench/Sort/Input into hdfs://localhost:9000/user/hadoop/HiBench/HiBench/Sort/Output with 8 reduces.
Job started: Thu May 11 15:16:43 CST 2017
17/05/11 15:16:43 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
17/05/11 15:16:56 INFO input.FileInputFormat: Total input files to process : 8
17/05/11 15:17:11 INFO mapreduce.JobSubmitter: number of splits:24
17/05/11 15:17:15 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1494484569605_0006
...
17/05/11 15:17:27 INFO mapreduce.Job: map 0% reduce 0%
17/05/11 15:17:43 INFO mapreduce.Job: map 25% reduce 0%
...
17/05/11 15:22:40 INFO mapreduce.Job: map 100% reduce 98%
17/05/11 15:22:48 INFO mapreduce.Job: map 100% reduce 100%
17/05/11 15:23:02 INFO mapreduce.Job: Job job_1494484569605_0006 completed successfully
17/05/11 15:23:02 INFO mapreduce.Job: Counters: 51
File System Counters
...
Job Counters
...
Map-Reduce Framework
...
Shuffle Errors
...
File Input Format Counters
...
File Output Format Counters
...
Job ended: Thu May 11 15:23:02 CST 2017
The job took 378 seconds.
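Besides the per-workload log, HiBench also appends a one-line summary (data size, duration, throughput, and so on) for each run to a top-level report file, which is handy for comparing runs; the path below assumes HiBench's default layout:

cat report/hibench.report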

Summary

This article has been a quick configuration guide for HiBench, a common benchmark suite for Hadoop; I hope it helps. Feel free to leave any questions in the comments below, and I will reply promptly.
