Setting up a Hadoop environment on CentOS with VMware Workstation 10

 

 

Contents

1. Overview
  1.1. Software preparation
  1.2. Hardware preparation
2. Installing and configuring the virtual machines
  2.1. Creating the virtual machines
    2.1.1. Creating virtual machine node 1
    2.1.2. Creating virtual machine node 2
    2.1.3. Creating virtual machine node 3
  2.2. Installing the operating system (CentOS 6.0)
  2.3. Installing the JDK
    2.3.1. Preparing the JDK
    2.3.2. Uploading the JDK
    2.3.3. Installing the JDK
    2.3.4. Configuring JDK environment variables
  2.4. Configuring the network
    2.4.1. Configuring hosts
    2.4.2. Configuring the network
  2.5. Configuring resources and parameters
    2.5.1. Changing the hostnames
    2.5.2. Configuring users, groups, directories, and permissions
    2.5.3. Tuning kernel parameters in /etc/sysctl.conf
  2.6. Configuring SSH equivalence and connectivity (optional)
    2.6.1. uhadoop user equivalence
    2.6.2. uhadoop user connectivity
3. Installing Hadoop
  3.1. Installing and configuring Hadoop
    3.1.1. Unpacking the Hadoop archive
    3.1.2. Adding Hadoop environment variables
    3.1.3. Editing the Hadoop configuration files
    3.1.4. Formatting the file system
    3.1.5. Starting and stopping services
    3.1.6. Viewing Hadoop information
4. Hadoop HDFS test
5. References
  5.1. References


1. Overview

1.1. Software preparation

  • SecureCRT: used on the client machine to connect to Linux over SSH
  • VMware Workstation 10:

VMware-workstation-full-10.0.1-1379776.exe

5C4A7-6Q20J-6ZD58-K2C72-0AKPE (tested, works)

1Y0W5-0W205-7Z8J0-C8C5M-9A6MF

  • CentOS 6.0: CentOS-6.0-i386-bin-DVD.iso
  • JDK: jdk-8u25-linux-i586.rpm
  • Hadoop: hadoop-2.5.2.tar.gz

1.2. Hardware preparation

  • Windows environment:
  • Virtual machine environment:

2. Installing and configuring the virtual machines

2.1. Creating the virtual machines

2.1.1. Creating virtual machine node 1

(screenshots omitted)

2.1.2. Creating virtual machine node 2

  Same steps as node 1.

2.1.3. Creating virtual machine node 3

2.2. Installing the operating system (CentOS 6.0)

  Install it on all three virtual machines; this happens as part of creating each virtual machine node.

2.3. Installing the JDK

  Perform this on all nodes.

2.3.1. Preparing the JDK

  Download the JDK from:

  http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

2.3.2. Uploading the JDK

Upload the JDK:

sftp> put E:\upload_linux\jdk-8u25-linux-i586.rpm  /home/uhadoop/uhadoop/

Uploading jdk-8u25-linux-i586.rpm to /home/uhadoop/jdk-8u25-linux-i586.rpm

100% 138487KB   9891KB/s   00:00:14

2.3.3. Installing the JDK

Install the JDK:

[root@master uhadoop]# rpm -ivh jdk-8u25-linux-i586.rpm

Preparing...                ########################################### [100%]

1:jdk1.8.0_25            ########################################### [100%]

Unpacking JAR files...

rt.jar...

jsse.jar...

charsets.jar...

tools.jar...

localedata.jar...

jfxrt.jar...

plugin.jar...

javaws.jar...

deploy.jar...

2.3.4. Configuring JDK environment variables

Check the directory where the JDK was installed, /usr/java/:

[root@master java]# cd /usr/java

[root@master java]# ll

total 4

lrwxrwxrwx. 1 root root   16 Dec  8 19:54 default -> /usr/java/latest

drwxr-xr-x. 9 root root 4096 Dec  8 19:54 jdk1.8.0_25

lrwxrwxrwx. 1 root root   21 Dec  8 19:54 latest -> /usr/java/jdk1.8.0_25

[root@master java]# cd jdk1.8.0_25

[root@master jdk1.8.0_25]# ll

total 25796

drwxr-xr-x. 2 root root     4096 Dec  8 19:54 bin

-rw-r--r--. 1 root root     3244 Sep 17 16:29 COPYRIGHT

drwxr-xr-x. 4 root root     4096 Dec  8 19:54 db

drwxr-xr-x. 3 root root     4096 Dec  8 19:54 include

-rw-r--r--. 1 root root  5025525 Sep 16 09:24 javafx-src.zip

drwxr-xr-x. 5 root root     4096 Dec  8 19:54 jre

drwxr-xr-x. 5 root root     4096 Dec  8 19:54 lib

-rw-r--r--. 1 root root       40 Sep 17 16:29 LICENSE

drwxr-xr-x. 4 root root     4096 Dec  8 19:54 man

-rw-r--r--. 1 root root      159 Sep 17 16:29 README.html

-rw-r--r--. 1 root root      524 Sep 17 16:29 release

-rw-r--r--. 1 root root 21056925 Sep 17 16:29 src.zip

-rw-r--r--. 1 root root   110114 Sep 16 09:24 THIRDPARTYLICENSEREADME-JAVAFX.txt

-rw-r--r--. 1 root root   178400 Sep 17 16:29 THIRDPARTYLICENSEREADME.txt

Edit /etc/profile:

[root@master /]# cd etc

[root@master etc]# ls profile

profile

[root@master etc]# vi profile

# /etc/profile

# System wide environment and startup programs, for login setup

# Functions and aliases go in /etc/bashrc

# It's NOT good idea to change this file unless you know what you

# are doing. Much better way is to create custom.sh shell script in

# /etc/profile.d/ to make custom changes to environment. This will

# prevent need for merging in future updates.

pathmunge () {

case ":${PATH}:" in

*:"$1":*)

;;

*)

if [ "$2" = "after" ] ; then

PATH=$PATH:$1

else

PATH=$1:$PATH

fi

esac

}

if [ -x /usr/bin/id ]; then

if [ -z "$EUID" ]; then

# ksh workaround

EUID=`id -u`

UID=`id -ru`

fi

USER="`id -un`"

LOGNAME=$USER

MAIL="/var/spool/mail/$USER"

fi

# Path manipulation

if [ "$EUID" = "0" ]; then

pathmunge /sbin

pathmunge /usr/sbin

pathmunge /usr/local/sbin

else

pathmunge /usr/local/sbin after

pathmunge /usr/sbin after

pathmunge /sbin after

fi

HOSTNAME=`/bin/hostname 2>/dev/null`

HISTSIZE=1000

if [ "$HISTCONTROL" = "ignorespace" ] ; then

export HISTCONTROL=ignoreboth

else

export HISTCONTROL=ignoredups

fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL

for i in /etc/profile.d/*.sh ; do

if [ -r "$i" ]; then

if [ "$PS1" ]; then

. $i

else

. $i >/dev/null 2>&1

fi

fi

done

# set environment by HondaHsu 2014

JAVA_HOME=/usr/java/jdk1.8.0_25

JRE_HOME=/usr/java/jdk1.8.0_25/jre

PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib

export JAVA_HOME JRE_HOME PATH CLASSPATH

unset i

unset pathmunge

[root@master etc]#

[root@master uhadoop]#

[root@master java]# java -version

java version "1.8.0_25"

Java(TM) SE Runtime Environment (build 1.8.0_25-b17)

Java HotSpot(TM) Server VM (build 25.25-b02, mixed mode)

2.4. Configuring the network

2.4.1. Configuring hosts

The hosts file maps each node's hostname to an IP address, so that the master node can later look up and reach every node quickly. It must be configured on all three virtual machine nodes:

Add the following on master, node1, and node2.

Network settings (example for master):

Interface: eth0
IP address: 10.10.36.100
Netmask: 255.255.255.0
Network type: public
Name resolution: /etc/hosts

/etc/hosts entries:

10.10.36.100  master    # NameNode
10.10.36.101  node1     # DataNode
10.10.36.123  node2     # DataNode
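The entries above can be appended idempotently with a small helper. This is a sketch, not part of the original procedure: `add_host` and the temp-file default are illustrative; point HOSTS_FILE at /etc/hosts when running as root on the real nodes.

```shell
# Append cluster entries to a hosts file without duplicating them.
# HOSTS_FILE defaults to a temp file so the sketch runs without root;
# set HOSTS_FILE=/etc/hosts on the actual nodes.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"

add_host() {  # add_host <ip> <hostname>
  grep -q "[[:space:]]$2\$" "$HOSTS_FILE" 2>/dev/null || printf '%s  %s\n' "$1" "$2" >> "$HOSTS_FILE"
}

add_host 10.10.36.100 master   # NameNode
add_host 10.10.36.101 node1    # DataNode
add_host 10.10.36.123 node2    # DataNode
```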

           

2.4.2. Configuring the network

Bring the configured IP addresses up with ifconfig and ifup:

[root@master /]# sudo ifconfig eth0 10.10.36.100 netmask 255.255.255.0 up

[root@master /]# ifup eth0

Active connection state: activated

Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/4

[root@master /]#

[root@node1 /]# sudo ifconfig eth0 10.10.36.101 netmask 255.255.255.0 up

[root@node1 /]# ifup eth0

Active connection state: activated

Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/4

[root@node1 /]#

[root@node2 /]# sudo ifconfig eth0 10.10.36.123 netmask 255.255.255.0 up

[root@node2 /]# ifup eth0

Active connection state: activated

Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/3

[root@node2 /]# ifconfig -a

eth0      Link encap:Ethernet  HWaddr 00:0C:29:57:08:7B

inet addr:10.10.36.123  Bcast:10.10.36.255  Mask:255.255.255.0

inet6 addr: fe80::20c:29ff:fe57:87b/64 Scope:Link

UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

RX packets:11506 errors:0 dropped:0 overruns:0 frame:0

TX packets:5564 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:1090727 (1.0 MiB)  TX bytes:538552 (525.9 KiB)

Interrupt:19 Base address:0x2024

lo        Link encap:Local Loopback

inet addr:127.0.0.1  Mask:255.0.0.0

inet6 addr: ::1/128 Scope:Host

UP LOOPBACK RUNNING  MTU:16436  Metric:1

RX packets:10 errors:0 dropped:0 overruns:0 frame:0

TX packets:10 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:578 (578.0 b)  TX bytes:578 (578.0 b)

[root@node2 /]#
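Note that an address set with `ifconfig` lasts only until the next reboot. On CentOS 6 the setting is made persistent in the interface file /etc/sysconfig/network-scripts/ifcfg-eth0; a minimal sketch for master, using the values from this section and leaving everything else at the installer defaults:

```ini
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.10.36.100
NETMASK=255.255.255.0
```

After editing the file, `service network restart` (or ifdown/ifup eth0) re-reads it.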

2.5. Configuring resources and parameters

2.5.1. Changing the hostnames

[root@master /]# vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=master

[root@node1 /]# vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=node1

[root@node2 /]# vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=node2

2.5.2. Configuring users, groups, directories, and permissions

  Create a dedicated user group and user for the hadoop cluster. This part is straightforward; for reference:

  sudo groupadd hadoop    # create the hadoop group

  sudo useradd -s /bin/bash -d /home/uhadoop -m uhadoop -g hadoop -G admin   # add a uhadoop user that belongs to the hadoop group and has the supplementary group admin

All three virtual machine nodes need the steps above to create the hadoop runtime account.

[root@master /]# sudo groupadd hadoop

[root@master /]# sudo groupadd admin

[root@master /]# sudo useradd -s /bin/bash -d /home/uhadoop -m uhadoop -g hadoop -G admin

[root@master /]# echo -n uhadoop|passwd --stdin uhadoop

Changing password for user uhadoop.

passwd: all authentication tokens updated successfully.

[root@node1 /]# sudo groupadd hadoop

[root@node1 /]# sudo groupadd admin

[root@node1 /]# sudo useradd -s /bin/bash -d /home/uhadoop -m uhadoop -g hadoop -G admin

[root@node1 /]# echo -n uhadoop|passwd --stdin uhadoop

Changing password for user uhadoop.

passwd: all authentication tokens updated successfully.

[root@node2 /]# sudo groupadd hadoop

[root@node2 /]# sudo groupadd admin

[root@node2 /]# sudo useradd -s /bin/bash -d /home/uhadoop -m uhadoop -g hadoop -G admin

[root@node2 /]# echo -n uhadoop|passwd --stdin uhadoop

Changing password for user uhadoop.

passwd: all authentication tokens updated successfully.

2.5.3. Tuning kernel parameters in /etc/sysctl.conf

These values were carried over from an Oracle database setup; after editing, apply them on each node with `sysctl -p`.

[root@master etc]# vi sysctl.conf

# add parameter for oracle

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 1073741824

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048586

2.6. Configuring SSH equivalence and connectivity (optional)

2.6.1. uhadoop user equivalence

1. Run all of the following as the uhadoop user. On each node, create a .ssh directory in uhadoop's home directory and set its permissions.

master

[uhadoop@master ~]$ mkdir ~/.ssh

[uhadoop@master ~]$ chmod 755 ~/.ssh

[uhadoop@master ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/uhadoop/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/uhadoop/.ssh/id_rsa.

Your public key has been saved in /home/uhadoop/.ssh/id_rsa.pub.

The key fingerprint is:

85:fb:1f:d1:01:39:f5:fd:39:f0:cc:ea:9f:02:cc:06 uhadoop@master

The key's randomart image is:

+--[ RSA 2048]----+

|            .o.  |

|         .  o. ..|

|        . .  o. o|

|         E   .=.o|

|        S + . .*.|

|         . = .. .|

|          o o.   |

|           ..o  .|

|            ..oo |

+-----------------+

[uhadoop@master ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/uhadoop/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/uhadoop/.ssh/id_dsa.

Your public key has been saved in /home/uhadoop/.ssh/id_dsa.pub.

The key fingerprint is:

cf:21:a7:cb:53:14:37:bd:eb:a8:18:20:0b:a5:19:7e uhadoop@master

The key's randomart image is:

+--[ DSA 1024]----+

|             .   |

|          . o .  |

|  . .      o . . |

| . =      .   .  |

|  = E . S.o    . |

|   o o . *..  .  |

|    .   o.o  o   |

|       ..+  . .  |

|        +...     |

+-----------------+

[uhadoop@master ~]$

node1

[uhadoop@node1 /]$ mkdir ~/.ssh

[uhadoop@node1 /]$ chmod 755 ~/.ssh

[uhadoop@node1 /]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/uhadoop/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/uhadoop/.ssh/id_rsa.

Your public key has been saved in /home/uhadoop/.ssh/id_rsa.pub.

The key fingerprint is:

c5:13:c5:28:ef:99:4e:19:0f:8c:a9:32:03:18:0c:8f uhadoop@node1

The key's randomart image is:

+--[ RSA 2048]----+

|+         .+.    |

|.+      .....    |

|Eo.      *+      |

|. .     o.=.     |

|   .   .S. B     |

|    + .   * .    |

|     +   o       |

|          .      |

|                 |

+-----------------+

[uhadoop@node1 /]$

[uhadoop@node1 /]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/uhadoop/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/uhadoop/.ssh/id_dsa.

Your public key has been saved in /home/uhadoop/.ssh/id_dsa.pub.

The key fingerprint is:

a5:cb:21:e6:ea:6c:1b:bc:e0:6d:a0:af:c6:b2:9e:3d uhadoop@node1

The key's randomart image is:

+--[ DSA 1024]----+

|                 |

|                 |

|          .      |

|         o       |

|      o S        |

|  .. o o o       |

|....o . o        |

|o+oEo+           |

|**o=O.           |

+-----------------+

[uhadoop@node1 /]$

node2

[uhadoop@node2 /]$ mkdir ~/.ssh

[uhadoop@node2 /]$ chmod 755 ~/.ssh

[uhadoop@node2 /]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/uhadoop/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/uhadoop/.ssh/id_rsa.

Your public key has been saved in /home/uhadoop/.ssh/id_rsa.pub.

The key fingerprint is:

a0:36:29:f4:6a:65:8c:1a:0e:6b:7b:71:c7:12:4b:ab uhadoop@node2

The key's randomart image is:

+--[ RSA 2048]----+

|                 |

|                 |

|  .   .          |

| . + = .         |

|o o @ = S        |

|o+ B * o         |

|ooo + o          |

|...E             |

| ..              |

+-----------------+

[uhadoop@node2 /]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/uhadoop/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/uhadoop/.ssh/id_dsa.

Your public key has been saved in /home/uhadoop/.ssh/id_dsa.pub.

The key fingerprint is:

a4:95:c4:8b:76:71:71:d5:c4:9b:9f:bb:cb:6d:5e:ef uhadoop@node2

The key's randomart image is:

+--[ DSA 1024]----+

|       .. .....+.|

|       .o...    o|

|       .++      o|

|      o+o      o |

|     ...S       o|

|               ..|

|                o|

|              ..=|

|               BE|

+-----------------+

Accept the defaults above by pressing Enter at every prompt.

 

master

cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node1 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node2 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

[uhadoop@master ~]$ cd .ssh

[uhadoop@master .ssh]$ ll

total 48

-rw-r--r-- 1 uhadoop hadoop 2000 Sep 25 00:48 authorized_keys

-rw------- 1 uhadoop hadoop  668 Sep 25 00:43 id_dsa

-rw-r--r-- 1 uhadoop hadoop  604 Sep 25 00:43 id_dsa.pub

-rw------- 1 uhadoop hadoop 1675 Sep 25 00:42 id_rsa

-rw-r--r-- 1 uhadoop hadoop  396 Sep 25 00:42 id_rsa.pub

-rw-r--r-- 1 uhadoop hadoop  404 Sep 25 00:48 known_hosts

node1

cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh uhadoop@master cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node2 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh uhadoop@master cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh uhadoop@master cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

node2

cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh uhadoop@master cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node1 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh uhadoop@master cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh uhadoop@master cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh uhadoop@node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
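An alternative to the per-node cat loops above is to build one authorized_keys containing every node's public keys and distribute the same file everywhere. The sketch below simulates the merge in a temp directory with fake key strings so it runs without a live cluster; on the real hosts each `*.pub` would come from `ssh uhadoop@<node> cat '~/.ssh/*.pub'` (note the quotes, which stop the local shell from expanding the glob before ssh runs).

```shell
# Merge all nodes' public keys into a single authorized_keys.
# The .pub files here are local stand-ins for the real keys.
SSH_DIR="$(mktemp -d)"
printf 'ssh-rsa AAAAfake1 uhadoop@master\n' > "$SSH_DIR/master.pub"
printf 'ssh-rsa AAAAfake2 uhadoop@node1\n'  > "$SSH_DIR/node1.pub"
printf 'ssh-rsa AAAAfake3 uhadoop@node2\n'  > "$SSH_DIR/node2.pub"

cat "$SSH_DIR"/*.pub > "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"   # sshd rejects group/world-writable files
```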

Test connectivity:

[uhadoop@master ~]$ ssh 10.10.36.101

[uhadoop@node1 ~]$ ssh 10.10.36.123

[uhadoop@node2 ~]$ ssh 10.10.36.101

Last login: Sat Dec  6 23:38:05 2014 from master

[uhadoop@node1 ~]$

[uhadoop@node1 /]$ ssh 10.10.36.123

Last login: Sat Dec  6 23:38:19 2014 from node1

[uhadoop@node2 ~]$ ssh 10.10.36.100

[uhadoop@master ~]$

[uhadoop@node2 /]$ ssh 10.10.36.100

Last login: Sat Dec  6 23:39:05 2014 from node2

[uhadoop@master ~]$ ssh 10.10.36.101

Last login: Sat Dec  6 23:39:28 2014 from node2

[uhadoop@node1 ~]$

2.6.2. uhadoop user connectivity

2. Establish user equivalence; run on all three nodes (master, node1, node2):

[uhadoop@master /]$ exec ssh-agent $SHELL

[uhadoop@master /]$ ssh-add

Identity added: /home/uhadoop/.ssh/id_rsa (/home/uhadoop/.ssh/id_rsa)

Identity added: /home/uhadoop/.ssh/id_dsa (/home/uhadoop/.ssh/id_dsa)

[uhadoop@master /]$ ssh node1 date

Mon Dec  8 17:46:14 PST 2014

[uhadoop@master /]$ ssh node2 date

Mon Dec  8 17:46:22 PST 2014

[uhadoop@master /]$

ssh master date; ssh node1 date; ssh node2 date

[uhadoop@node1 /]$ exec ssh-agent $SHELL

[uhadoop@node1 /]$ ssh master date

Mon Dec  8 17:44:18 PST 2014

[uhadoop@node1 /]$ ssh node2 date

Mon Dec  8 17:44:26 PST 2014

[uhadoop@node1 /]$ ssh node1 date

Mon Dec  8 17:44:37 PST 2014

[uhadoop@node1 /]$

[uhadoop@node2 /]$ exec ssh-agent $SHELL

[uhadoop@node2 /]$ ssh-add

Identity added: /home/uhadoop/.ssh/id_rsa (/home/uhadoop/.ssh/id_rsa)

Identity added: /home/uhadoop/.ssh/id_dsa (/home/uhadoop/.ssh/id_dsa)

[uhadoop@node2 /]$ ssh master date

Mon Dec  8 17:45:15 PST 2014

[uhadoop@node2 /]$ ssh node1 date

Mon Dec  8 17:45:25 PST 2014

[uhadoop@node2 /]$ ssh node2 date

The authenticity of host 'node2 (10.10.36.123)' can't be established.

RSA key fingerprint is a7:0c:1e:b8:30:6e:e9:01:e0:1e:86:5f:4a:b2:7c:cf.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'node2,10.10.36.123' (RSA) to the list of known hosts.

Mon Dec  8 17:45:42 PST 2014

[uhadoop@node2 /]$

3. Installing Hadoop

3.1. Installing and configuring Hadoop

3.1.1. Unpacking the Hadoop archive

[uhadoop@master uhadoop]$ tar -zxvf hadoop-2.5.2.tar.gz

[uhadoop@master hadoop-2.5.2]$ ll

total 52

drwxr-xr-x. 2 uhadoop hadoop  4096 Nov 14 15:53 bin

drwxr-xr-x. 3 uhadoop hadoop  4096 Nov 14 15:53 etc

drwxr-xr-x. 2 uhadoop hadoop  4096 Nov 14 15:53 include

drwxr-xr-x. 3 uhadoop hadoop  4096 Nov 14 15:53 lib

drwxr-xr-x. 2 uhadoop hadoop  4096 Nov 14 15:53 libexec

-rw-r--r--. 1 uhadoop hadoop 15458 Nov 14 15:53 LICENSE.txt

-rw-r--r--. 1 uhadoop hadoop   101 Nov 14 15:53 NOTICE.txt

-rw-r--r--. 1 uhadoop hadoop  1366 Nov 14 15:53 README.txt

drwxr-xr-x. 2 uhadoop hadoop  4096 Nov 14 15:53 sbin

drwxr-xr-x. 4 uhadoop hadoop  4096 Nov 14 15:53 share

3.1.2. Adding Hadoop environment variables

Add the environment variables on all three nodes:

[root@master /]# vi /etc/profile

# set environment by HondaHsu 2014

JAVA_HOME=/usr/java/jdk1.8.0_25

JRE_HOME=/usr/java/jdk1.8.0_25/jre

HADOOP_HOME=/home/uhadoop/uhadoop/hadoop-2.5.2

HADOOP_COMMON_HOME=$HADOOP_HOME

HADOOP_HDFS_HOME=$HADOOP_HOME

HADOOP_MAPRED_HOME=$HADOOP_HOME

HADOOP_YARN_HOME=$HADOOP_HOME

HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$HADOOP_HOME/lib

export JAVA_HOME JRE_HOME PATH CLASSPATH HADOOP_HOME HADOOP_COMMON_HOME HADOOP_HDFS_HOME HADOOP_MAPRED_HOME HADOOP_YARN_HOME HADOOP_CONF_DIR

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

[root@master /]# source /etc/profile

3.1.3. Editing the Hadoop configuration files (identical on all nodes)

(1)core-site.xml

[uhadoop@master hadoop]$ cd /home/uhadoop/uhadoop/hadoop-2.5.2/etc/hadoop

[uhadoop@master hadoop]$ vi core-site.xml

<configuration>

<property>

<name>hadoop.tmp.dir</name>

<value>/home/uhadoop/uhadoop/hadoop-2.5.2/tmp</value>

<description>A base for other temporary directories.</description>

</property>

 

<property>

<name>fs.default.name</name>

<value>hdfs://master:9000</value>

</property>

 

<property>

<name>io.file.buffer.size</name>

<value>131072</value>

</property>

 

<property>

<name>hadoop.proxyuser.root.hosts</name>

<value>namenode</value>

</property>

 

<property>

<name>hadoop.proxyuser.root.groups</name>

<value>*</value>

</property>

</configuration>
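A side note on the configuration above: in Hadoop 2.x, `fs.default.name` is deprecated in favor of `fs.defaultFS`. Both still work in 2.5.2, but the newer key avoids a deprecation warning in the logs:

```xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
```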

(2)hdfs-site.xml

[uhadoop@master hadoop]$ vi hdfs-site.xml

<configuration>

<property> 

<name>dfs.namenode.secondary.http-address</name> 

<value>master:50090</value> 

</property>

 

<property>

<name>dfs.datanode.max.xcievers</name>

<value>4096</value>

</property>

 

<property>

<name>dfs.namenode.name.dir</name>

<value>/home/uhadoop/uhadoop/hadoop-2.5.2/hdfs/name</value>

<final>true</final>

</property>

 

<property>

<name>dfs.datanode.data.dir</name>

<value>/home/uhadoop/uhadoop/hadoop-2.5.2/hdfs/data</value>

<final>true</final>

</property>

 

<property>

<name>dfs.replication</name>

<value>2</value>

</property>

 

<property>

<name>dfs.permissions</name>

<value>false</value>

</property>

</configuration>

(3)mapred-site.xml

[uhadoop@master hadoop]$ cp mapred-site.xml.template mapred-site.xml

[uhadoop@master hadoop]$ vi mapred-site.xml

<configuration>

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>

 

<property> 

<name>mapreduce.jobtracker.http.address</name> 

<value>master:50030</value> 

</property>

 

<property>

<name>mapreduce.jobhistory.address</name>

<value>master:10020</value>

</property>

 

<property>

<name>mapreduce.jobhistory.webapp.address</name>

<value>master:19888</value>

</property>

 

<property>

<name>mapreduce.jobhistory.intermediate-done-dir</name>

<value>/mr-history/tmp</value>

</property>

 

<property>

<name>mapreduce.jobhistory.done-dir</name>

<value>/mr-history/done</value>

</property>

</configuration>

--------------------------------------------------------

The JobHistory server is a history server that ships with Hadoop and records completed MapReduce jobs. It is not started by default; it can be started with:

sbin/mr-jobhistory-daemon.sh start historyserver

---------------------------------------------------------

(4)yarn-site.xml

Note: YARN property names are case-sensitive and must start with lowercase `yarn.`; in Hadoop 2.x the aux-service value is `mapreduce_shuffle` (an underscore, since dots are not allowed in service names).

<configuration>

<!-- Site specific YARN configuration properties -->

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

<property>
<name>yarn.resourcemanager.address</name>
<value>master:18040</value>
</property>

<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:18030</value>
</property>

<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:18025</value>
</property>

<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:18041</value>
</property>

<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>

<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/home/uhadoop/uhadoop/hadoop-2.5.2/mynode/my</value>
</property>

<property>
<name>yarn.nodemanager.log.retain-seconds</name>
<value>10800</value>
</property>

<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/logs</value>
</property>

<property>
<name>yarn.nodemanager.remote-app-log-dir-suffix</name>
<value>logs</value>
</property>

<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>-1</value>
</property>

<property>
<name>yarn.log-aggregation.retain-check-interval-seconds</name>
<value>-1</value>
</property>

</configuration>

(5)slaves

[uhadoop@master hadoop]$ vi slaves

node1

node2

(6) Set JAVA_HOME: add a JAVA_HOME setting to both hadoop-env.sh and yarn-env.sh

[uhadoop@master hadoop]$ vi hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_25

or export JAVA_HOME=${JAVA_HOME} (an explicit path is safer, since daemons started over ssh may not inherit JAVA_HOME)

[uhadoop@master hadoop]$ vi yarn-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_25

or export JAVA_HOME=${JAVA_HOME} (an explicit path is safer, since daemons started over ssh may not inherit JAVA_HOME)

3.1.4. Formatting the file system

Format the file system (run on the NameNode, master):

[uhadoop@master ~]$ cd /home/uhadoop/uhadoop/hadoop-2.5.2/bin

[uhadoop@master bin]$ hdfs namenode -format

14/12/11 17:57:48 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG:   host = master/10.10.36.100

STARTUP_MSG:   args = [-format]

STARTUP_MSG:   version = 2.5.2

STARTUP_MSG:   classpath = /home/uhadoop/uhadoop/hadoop-2.5.2/etc/hadoop:... (long jar listing omitted)
/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.2.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.5.2.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.5.2.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.2-tests.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.5.2.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.5.2.jar:/ho
me/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.5.2.jar:/home/uhadoop/uhadoop/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.5.2.jar:/contrib/capacity-scheduler/*.jar

STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0; compiled by 'jenkins' on 2014-11-14T23:45Z

STARTUP_MSG:   java = 1.8.0_25

************************************************************/

14/12/11 17:57:48 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]

14/12/11 17:57:48 INFO namenode.NameNode: createNameNode [-format]

Java HotSpot(TM) Server VM warning: You have loaded library /home/uhadoop/uhadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.

It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.

14/12/11 17:57:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Formatting using clusterid: CID-3bc7e084-b3ae-4cd1-8753-4640d03e4420

14/12/11 17:57:50 INFO namenode.FSNamesystem: fsLock is fair:true

14/12/11 17:57:50 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000

14/12/11 17:57:50 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true

14/12/11 17:57:50 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000

14/12/11 17:57:50 INFO blockmanagement.BlockManager: The block deletion will start around 2014 Dec 11 17:57:50

14/12/11 17:57:50 INFO util.GSet: Computing capacity for map BlocksMap

14/12/11 17:57:50 INFO util.GSet: VM type       = 32-bit

14/12/11 17:57:50 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB

14/12/11 17:57:50 INFO util.GSet: capacity      = 2^22 = 4194304 entries

14/12/11 17:57:50 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false

14/12/11 17:57:50 INFO blockmanagement.BlockManager: defaultReplication         = 3

14/12/11 17:57:50 INFO blockmanagement.BlockManager: maxReplication             = 512

14/12/11 17:57:50 INFO blockmanagement.BlockManager: minReplication             = 1

14/12/11 17:57:50 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2

14/12/11 17:57:50 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false

14/12/11 17:57:50 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000

14/12/11 17:57:50 INFO blockmanagement.BlockManager: encryptDataTransfer        = false

14/12/11 17:57:50 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000

14/12/11 17:57:50 INFO namenode.FSNamesystem: fsOwner             = uhadoop (auth:SIMPLE)

14/12/11 17:57:50 INFO namenode.FSNamesystem: supergroup          = supergroup

14/12/11 17:57:50 INFO namenode.FSNamesystem: isPermissionEnabled = true

14/12/11 17:57:50 INFO namenode.FSNamesystem: HA Enabled: false

14/12/11 17:57:50 INFO namenode.FSNamesystem: Append Enabled: true

14/12/11 17:57:51 INFO util.GSet: Computing capacity for map INodeMap

14/12/11 17:57:51 INFO util.GSet: VM type       = 32-bit

14/12/11 17:57:51 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB

14/12/11 17:57:51 INFO util.GSet: capacity      = 2^21 = 2097152 entries

14/12/11 17:57:51 INFO namenode.NameNode: Caching file names occuring more than 10 times

14/12/11 17:57:51 INFO util.GSet: Computing capacity for map cachedBlocks

14/12/11 17:57:51 INFO util.GSet: VM type       = 32-bit

14/12/11 17:57:51 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB

14/12/11 17:57:51 INFO util.GSet: capacity      = 2^19 = 524288 entries

14/12/11 17:57:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033

14/12/11 17:57:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0

14/12/11 17:57:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000

14/12/11 17:57:51 INFO namenode.FSNamesystem: Retry cache on namenode is enabled

14/12/11 17:57:51 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis

14/12/11 17:57:51 INFO util.GSet: Computing capacity for map NameNodeRetryCache

14/12/11 17:57:51 INFO util.GSet: VM type       = 32-bit

14/12/11 17:57:51 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB

14/12/11 17:57:51 INFO util.GSet: capacity      = 2^16 = 65536 entries

14/12/11 17:57:51 INFO namenode.NNConf: ACLs enabled? false

14/12/11 17:57:51 INFO namenode.NNConf: XAttrs enabled? true

14/12/11 17:57:51 INFO namenode.NNConf: Maximum size of an xattr: 16384

14/12/11 17:57:51 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1623836625-10.10.36.100-1418349471725

14/12/11 17:57:52 INFO common.Storage: Storage directory /tmp/hadoop-uhadoop/dfs/name has been successfully formatted.

14/12/11 17:57:52 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

14/12/11 17:57:52 INFO util.ExitUtil: Exiting with status 0

14/12/11 17:57:52 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at master/10.10.36.100

************************************************************/
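The repeated "stack guard" warning in the output above is harmless but noisy; it appears because a 32-bit native library is loaded on this JVM. It can be silenced the way the log itself suggests, by clearing the executable-stack flag on the library. A minimal sketch, run as root and assuming the `execstack` tool is installed (on CentOS 6 it is typically available via the prelink package):

```shell
# Clear the executable-stack flag on the native Hadoop library;
# the path is taken from the warning message itself.
execstack -c /home/uhadoop/uhadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0
```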

3.1.5.          Starting and Stopping Services

1. Start DFS and YARN:

[uhadoop@master sbin]$ cd /home/uhadoop/uhadoop/hadoop-2.5.2/sbin

[uhadoop@master sbin]$ ./start-dfs.sh

Java HotSpot(TM) Server VM warning: You have loaded library /home/uhadoop/uhadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.

It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.

14/12/11 01:40:26 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting namenodes on [master]

master: starting namenode, logging to /home/uhadoop/uhadoop/hadoop-2.5.2/logs/hadoop-uhadoop-namenode-master.out

node2: starting datanode, logging to /home/uhadoop/uhadoop/hadoop-2.5.2/logs/hadoop-uhadoop-datanode-node2.out

node1: datanode running as process 3197. Stop it first.

Starting secondary namenodes [0.0.0.0]

The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.

RSA key fingerprint is e2:46:50:9a:3a:19:0a:2f:4e:ef:b2:68:2b:91:f5:40.

Are you sure you want to continue connecting (yes/no)? yes

0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.

0.0.0.0: starting secondarynamenode, logging to /home/uhadoop/uhadoop/hadoop-2.5.2/logs/hadoop-uhadoop-secondarynamenode-master.out

Java HotSpot(TM) Server VM warning: You have loaded library /home/uhadoop/uhadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.

It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.

14/12/11 01:40:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
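The interactive host-key prompt for 0.0.0.0 above appears the first time start-dfs.sh launches the secondary namenode over SSH. It can be avoided by pre-seeding known_hosts before the first start; a sketch, assuming the default ~/.ssh location for the uhadoop user:

```shell
# Pre-accept the host key for 0.0.0.0 (the secondary namenode address)
# so start-dfs.sh does not stop to ask "Are you sure you want to continue connecting?"
ssh-keyscan 0.0.0.0 >> ~/.ssh/known_hosts
```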

[uhadoop@master sbin]$ ./start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /home/uhadoop/uhadoop/hadoop-2.5.2/logs/yarn-uhadoop-resourcemanager-master.out

node1: starting nodemanager, logging to /home/uhadoop/uhadoop/hadoop-2.5.2/logs/yarn-uhadoop-nodemanager-node1.out

node2: starting nodemanager, logging to /home/uhadoop/uhadoop/hadoop-2.5.2/logs/yarn-uhadoop-nodemanager-node2.out

2. Stopping services. The stop scripts mirror the start scripts; start-all.sh and stop-all.sh also work, but in Hadoop 2.x they are deprecated in favor of the separate DFS/YARN scripts:

[uhadoop@master sbin]$ ./start-all.sh

[uhadoop@master sbin]$ ./stop-all.sh

[uhadoop@master sbin]$ ./stop-dfs.sh

[uhadoop@master sbin]$ ./stop-yarn.sh
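After starting the daemons, jps (shipped with the JDK) is a quick way to confirm which Hadoop processes are actually running; the expected process names follow from the start-dfs.sh/start-yarn.sh output above:

```shell
# On the master, after start-dfs.sh and start-yarn.sh, jps should list
# NameNode, SecondaryNameNode and ResourceManager (plus Jps itself);
# on node1/node2 it should list DataNode and NodeManager.
jps | grep -E 'NameNode|SecondaryNameNode|ResourceManager'
```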

3.1.6.          Viewing Hadoop Information

1. Check the safe mode status:

[uhadoop@master bin]$ hadoop dfsadmin -safemode get

DEPRECATED: Use of this script to execute hdfs command is deprecated.

Instead use the hdfs command for it.

Java HotSpot(TM) Server VM warning: You have loaded library /home/uhadoop/uhadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.

It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.

14/12/11 19:11:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Safe mode is OFF

[uhadoop@master bin]$ hadoop dfsadmin -safemode leave

DEPRECATED: Use of this script to execute hdfs command is deprecated.

Instead use the hdfs command for it.

Java HotSpot(TM) Server VM warning: You have loaded library /home/uhadoop/uhadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.

It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.

14/12/11 19:13:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Safe mode is OFF
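In scripts it is often easier not to poll the state by hand: the safemode subcommand also accepts `wait`, which blocks until the NameNode leaves safe mode. A sketch using the non-deprecated `hdfs` entry point recommended by the warnings above:

```shell
# Block until the NameNode has left safe mode, then confirm the state
hdfs dfsadmin -safemode wait
hdfs dfsadmin -safemode get   # prints "Safe mode is ON" or "Safe mode is OFF"
```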

2. Query Hadoop cluster information:

[uhadoop@master bin]$ pwd

/home/uhadoop/uhadoop/hadoop-2.5.2/bin

[uhadoop@master bin]$ hadoop dfsadmin -report

DEPRECATED: Use of this script to execute hdfs command is deprecated.

Instead use the hdfs command for it.

Java HotSpot(TM) Server VM warning: You have loaded library /home/uhadoop/uhadoop/hadoop-2.5.2/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.

It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.

14/12/11 19:05:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Configured Capacity: 24809701376 (23.11 GB)

Present Capacity: 17320230912 (16.13 GB)

DFS Remaining: 17320181760 (16.13 GB)

DFS Used: 49152 (48 KB)

DFS Used%: 0.00%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

-------------------------------------------------

Live datanodes (2):

Name: 10.10.36.123:50010 (node2)

Hostname: node2

Decommission Status : Normal

Configured Capacity: 12404850688 (11.55 GB)

DFS Used: 24576 (24 KB)

Non DFS Used: 3741917184 (3.48 GB)

DFS Remaining: 8662908928 (8.07 GB)

DFS Used%: 0.00%

DFS Remaining%: 69.83%

Configured Cache Capacity: 0 (0 B)

Cache Used: 0 (0 B)

Cache Remaining: 0 (0 B)

Cache Used%: 100.00%

Cache Remaining%: 0.00%

Xceivers: 1

Last contact: Thu Dec 11 19:05:40 PST 2014

Name: 10.10.36.101:50010 (node1)

Hostname: node1

Decommission Status : Normal

Configured Capacity: 12404850688 (11.55 GB)

DFS Used: 24576 (24 KB)

Non DFS Used: 3747553280 (3.49 GB)

DFS Remaining: 8657272832 (8.06 GB)

DFS Used%: 0.00%

DFS Remaining%: 69.79%

Configured Cache Capacity: 0 (0 B)

Cache Used: 0 (0 B)

Cache Remaining: 0 (0 B)

Cache Used%: 100.00%

Cache Remaining%: 0.00%

Xceivers: 1

Last contact: Thu Dec 11 19:05:39 PST 2014

4.          Hadoop-HDFS Testing
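A minimal HDFS round-trip test exercises the cluster end to end. The commands below are a sketch: the file name and HDFS paths are illustrative, and they assume the cluster from section 3 is running with the uhadoop user:

```shell
# Create a small local file, upload it to HDFS, read it back, and clean up.
echo "hello hadoop" > /tmp/test1.txt
hdfs dfs -mkdir -p /user/uhadoop/in            # create a target directory in HDFS
hdfs dfs -put /tmp/test1.txt /user/uhadoop/in  # upload the local file
hdfs dfs -cat /user/uhadoop/in/test1.txt       # read it back through HDFS
hdfs dfs -rm /user/uhadoop/in/test1.txt        # clean up
```

If the cat step prints the original line, the NameNode and both DataNodes are cooperating correctly.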

5.          References

5.1.       References

References:

Installing Hadoop 2.4.0 on Ubuntu 14.04 (standalone mode)
http://www.cnblogs.com/kinglau/p/3794433.html

Installing Hadoop 2.4.0 on Ubuntu 14.04 (pseudo-distributed mode)
http://www.cnblogs.com/kinglau/p/3796164.html

Fixing errors when running the wordcount example in pseudo-distributed mode
http://www.cnblogs.com/kinglau/p/3364928.html

Setting up a Hadoop 2.4.0 development environment in Eclipse
http://www.cnblogs.com/kinglau/p/3802705.html

Hadoop study notes 30: Debugging CentOS Hadoop 2.2 MapReduce from Eclipse on Win7
http://zy19982004.iteye.com/blog/2024467

Distributed installation and deployment of Hadoop 2.5.0 on CentOS
http://my.oschina.net/yilian/blog/310189

Building and installing Hadoop 2.5.1 from source on CentOS 6.5
http://www.myhack58.com/Article/sort099/sort0102/2014/54025.htm

Analysis of two common fault-tolerance scenarios in Hadoop MapReduce
http://www.chinacloud.cn/show.aspx?id=15793&cid=17

Hadoop 2.2.0 cluster installation
http://blog.csdn.net/bluishglc/article/details/24591185

Apache Hadoop 2.2.0 HDFS HA + YARN multi-node deployment
http://blog.csdn.net/u010967382/article/details/20380387

Hadoop cluster configuration (a comprehensive summary)
http://blog.csdn.net/hguisu/article/details/7237395

Hadoop hdfs-site.xml configuration property list
http://he.iori.blog.163.com/blog/static/6955953520138107638208/
http://slaytanic.blog.51cto.com/2057708/1101111

The three Hadoop installation modes
http://blog.csdn.net/liumm0000/article/details/13408855

5.2.       Common Commands

Common commands:

hadoop dfs -ls                     List files under the HDFS working directory
hadoop dfs -ls in                  List the files in the HDFS directory "in"
hadoop dfs -put test1.txt test     Upload a file to the given path under a new name; the upload succeeds only once every DataNode has received the data
hadoop dfs -get in getin           Fetch from HDFS and rename locally to "getin"; like put, it works on both files and directories
hadoop dfs -rmr out                Recursively delete the given path from HDFS
hadoop dfs -cat in/*               Print the contents of the HDFS directory "in"
hadoop dfsadmin -report            Show basic HDFS statistics
hadoop dfsadmin -safemode leave    Leave safe mode
hadoop dfsadmin -safemode enter    Enter safe mode

start-balancer.sh                  Rebalance data across DataNodes
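Note that the `hadoop dfs` form used above is deprecated in Hadoop 2.x, as the DEPRECATED warnings in section 3.1.6 show. The modern equivalents are:

```shell
# File system operations: use "hdfs dfs" (or "hadoop fs") instead of "hadoop dfs"
hdfs dfs -ls /
# Administrative operations: use "hdfs dfsadmin" instead of "hadoop dfsadmin"
hdfs dfsadmin -report
```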
