1. Prepare a server

192.168.100.100

2. Install the JDK in advance
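A quick sanity check that the JDK is in place (Hadoop 2.7.x requires Java 7 or later):

java -version
echo $JAVA_HOME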

3. Hadoop daemons (all on the single server)

NameNode            192.168.100.100

SecondaryNameNode   192.168.100.100

DataNode            192.168.100.100

ResourceManager     192.168.100.100

NodeManager         192.168.100.100
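The configuration files below refer to this machine by the hostname node01, so that name must resolve to the server's IP, e.g. via an /etc/hosts entry (run as root):

echo "192.168.100.100 node01" >> /etc/hosts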

4. Download and extract Hadoop

http://archive.apache.org/dist/hadoop/common/hadoop-2.7.5/hadoop-2.7.5.tar.gz

/export/servers/ (all paths below assume the install lands at /export/servers/hadoop-2.7.5/)
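A sketch of this step (assumes wget is available):

mkdir -p /export/servers
cd /export/servers
wget http://archive.apache.org/dist/hadoop/common/hadoop-2.7.5/hadoop-2.7.5.tar.gz
tar -xzf hadoop-2.7.5.tar.gz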

5. Edit the configuration files

5.1  vim hadoop-2.7.5/etc/hadoop/core-site.xml

<configuration>
    <!-- fs.defaultFS supersedes the deprecated fs.default.name -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.100.100:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/export/servers/hadoop-2.7.5/hadoopDatas/tempDatas</value>
    </property>
    <!-- I/O buffer size; in production, tune to the server's capacity -->
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
    <!-- Enable the HDFS trash so deleted data can be recovered; unit is minutes (10080 = 7 days) -->
    <property>
        <name>fs.trash.interval</name>
        <value>10080</value>
    </property>
</configuration>
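To confirm the file parses and the key is picked up, hdfs getconf reads the on-disk configuration without any daemon running:

bin/hdfs getconf -confKey fs.defaultFS
# expected output: hdfs://192.168.100.100:8020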

5.2  vim hadoop-2.7.5/etc/hadoop/hdfs-site.xml

<configuration>
    <!-- Dynamic commissioning/decommissioning of cluster nodes (left disabled here)
    <property>
        <name>dfs.hosts</name>
        <value>/export/servers/hadoop-2.7.5/etc/hadoop/accept_host</value>
    </property>
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/export/servers/hadoop-2.7.5/etc/hadoop/deny_host</value>
    </property>
    -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node01:50090</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>node01:50070</value>
    </property>
    <!-- Paths where the NameNode stores its metadata; in production, decide the disk mount points first, then separate multiple directories with commas -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas,file:///export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas2</value>
    </property>
    <!-- Paths where DataNodes store block data; same comma-separated convention as above -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas,file:///export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas2</value>
    </property>
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/nn/edits</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/snn/name</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/dfs/snn/edits</value>
    </property>
    <!-- Only one DataNode exists on this single-node setup, so use a replication factor of 1 (3 is the usual multi-node default) -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <!-- Disable HDFS permission checking; convenient for a test environment -->
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <!-- HDFS block size in bytes -->
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>
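dfs.blocksize is given in bytes; 134217728 is exactly 128 MB:

echo $((128 * 1024 * 1024))   # prints 134217728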

5.3 vim hadoop-2.7.5/etc/hadoop/hadoop-env.sh

export JAVA_HOME=<path to your JDK>
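For example, if the JDK were unpacked under /export/servers/ (the directory name jdk1.8.0_241 here is hypothetical; substitute your actual path):

export JAVA_HOME=/export/servers/jdk1.8.0_241   # hypothetical JDK location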

5.4 vim hadoop-2.7.5/etc/hadoop/mapred-site.xml
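Note that the Hadoop 2.x tarball ships only a template for this file, so create it first:

cp hadoop-2.7.5/etc/hadoop/mapred-site.xml.template hadoop-2.7.5/etc/hadoop/mapred-site.xml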

<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- Allow small jobs to run entirely inside the ApplicationMaster's JVM -->
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
    <!-- JobHistory server RPC address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node01:10020</value>
    </property>
    <!-- JobHistory server web UI -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node01:19888</value>
    </property>
</configuration>

5.5 vim hadoop-2.7.5/etc/hadoop/yarn-site.xml

<configuration>
    <!-- Host that runs the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node01</value>
    </property>
    <!-- Auxiliary shuffle service required by MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Aggregate container logs into HDFS -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Keep aggregated logs for 604800 s = 7 days -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>

5.6  vim hadoop-2.7.5/etc/hadoop/mapred-env.sh

export JAVA_HOME=<path to your JDK, same value as in hadoop-env.sh>

5.7 vim hadoop-2.7.5/etc/hadoop/slaves

localhost
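start-dfs.sh and start-yarn.sh log in over SSH to every host listed in slaves, so passwordless SSH to localhost must already work; a minimal sketch:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys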

6. Start the services

Create the data directories:

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/tempDatas

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas2

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas2

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/nn/edits

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/snn/name

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/dfs/snn/edits

Before starting HDFS for the first time, the NameNode must be formatted (run from /export/servers/hadoop-2.7.5; format only once, since reformatting generates a new cluster ID and orphans existing DataNode data):

bin/hdfs namenode -format

Start the daemons:

sbin/start-dfs.sh

sbin/start-yarn.sh

sbin/mr-jobhistory-daemon.sh start historyserver
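Once the scripts return, jps (shipped with the JDK) should list all six daemons:

jps
# expected on this node: NameNode, DataNode, SecondaryNameNode,
# ResourceManager, NodeManager, JobHistoryServer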

Web UIs to verify startup:

HDFS:        http://192.168.100.100:50070

YARN:        http://192.168.100.100:8088

JobHistory:  http://192.168.100.100:19888
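As a final smoke test, the example jar bundled with the 2.7.5 release can push a small pi-estimation job through the whole stack:

bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar pi 2 10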
