
            HBase Installation and Simple Testing

            Posted on 2013-04-15 20:45 by 鑫龍 (category: HBASE)
            Reposted from: http://blog.chinaunix.net/uid-451-id-3156060.html

            1. Modify HDFS settings
            vi conf/hdfs-site.xml
            Add the following setting; HBase needs to access a large number of files:
            <property>
            <name>dfs.datanode.max.xcievers</name>
            <value>4096</value>
            </property>
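
            For the new limit to take effect, each datanode has to be restarted; a minimal sketch, assuming the 0.20-era per-daemon control scripts:

            # on every datanode, bounce the daemon so the new xcievers limit is picked up
            bin/hadoop-daemon.sh stop datanode
            bin/hadoop-daemon.sh start datanode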


            2. Set up NTP synchronization
            rpm -qa | grep ntp

            The master uses the default configuration.
            On the slaves:
            vi /etc/ntp.conf
            server 192.168.110.127
            Remove the default server entries and replace them with the master's address.

            chkconfig ntpd on
            service ntpd restart
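
            To confirm each slave is really syncing against the master, ntpq's peer listing is a quick check:

            # the master should be listed as a peer; a '*' marks the selected time source
            ntpq -p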


            Also: it is best to use the same time zone on every node.
            ln -sf /usr/share/zoneinfo/posix/Asia/Shanghai /etc/localtime 

            3. Adjust the nofile and nproc limits
            HBase opens a lot of files (every flush writes a new one), so the default of 1024 is far from enough.

            vi /etc/security/limits.conf
            hadoop - nofile 32768
            hadoop - nproc 32768

            Log back in as the hadoop user and verify:
            ulimit -a
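
            Rather than scanning the full ulimit -a listing, the two values that matter here can be checked directly:

            ulimit -n    # open files, should now report 32768
            ulimit -u    # max user processes, should now report 32768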



            4. Download and install
            Download the latest stable release from http://hbase.apache.org
            tar zxf hbase-0.92.1.tar.gz



            5. Set environment variables
            export HBASE_HOME=$HOME/hbase-0.92.1
            export HBASE_CONF_DIR=$HOME/hbase-conf

            Also add these to PATH and CLASSPATH, as sketched below.
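
            A minimal sketch of those additions (the jar name follows the 0.92.1 tarball layout, but treat it as an assumption):

            export PATH=$PATH:$HBASE_HOME/bin
            export CLASSPATH=$CLASSPATH:$HBASE_HOME/hbase-0.92.1.jar:$HBASE_CONF_DIR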


            6. Configuration
            cp -r $HBASE_HOME/conf $HOME/hbase-conf
            vi hbase-env.sh

            export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre
            export HBASE_HEAPSIZE=300
            export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
            export HBASE_LOG_DIR=${HBASE_HOME}/logs
            export HBASE_MANAGES_ZK=true

            vi hdfs-site.xml
            Add the following configuration to enable the durable sync feature, available from Hadoop 0.20.205 onward.
            Enabling it is essential; otherwise HBase may lose data. (My guess: the HLog needs this, since it must keep appending to its redo log, whereas HDFS normally only supports creating new files.)
            <configuration>
            <property>
            <name>dfs.support.append</name>
            <value>true</value>
            </property>
            </configuration>
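
            This property has to reach every HDFS node before it takes effect; a minimal rollout sketch, assuming the conf directory lives in each node's home directory as in step 10 (the restart in step 11 then picks it up):

            scp conf/hdfs-site.xml slave1:conf/
            scp conf/hdfs-site.xml slave2:conf/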

            The release notes of Hadoop 0.20.205 contain this line:
            * This release includes a merge of append/hsynch/hflush capabilities from 0.20-append branch, to support HBase in secure mode.

            7. Set up fully-distributed mode
            vi hbase-site.xml
            Set hbase.rootdir and hbase.cluster.distributed=true (these are HBase properties, so they belong in hbase-site.xml, not hdfs-site.xml):
            <property>
            <name>hbase.rootdir</name>
            <value>hdfs://master:9000/hbase</value>
            </property>
            <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
            </property>


            8. Set up the RegionServers
            cat regionservers
            slave1
            slave2


            9. Configure ZooKeeper
            vi hbase-env.sh
            export HBASE_MANAGES_ZK=true

            vi hbase-site.xml
            <property>
            <name>hbase.zookeeper.property.clientPort</name>
            <value>2222</value>
            </property>
            <property>
            <name>hbase.zookeeper.quorum</name>
            <value>slave1,slave2</value>
            </property>
            <property>
            <name>hbase.zookeeper.property.dataDir</name>
            <value>/home/hadoop/zookeeper</value>
            </property>
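
            Note that 2222 is not ZooKeeper's default client port (2181). Once HBase and its managed quorum are running, ZooKeeper's four-letter ruok command is a quick health check:

            # each quorum peer should answer "imok"
            echo ruok | nc slave1 2222
            echo ruok | nc slave2 2222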

            10. Copy the installation and configuration to the other nodes

            scp -r conf slave1:
            scp -r conf slave2:

            scp -r hbase-conf slave1:
            scp -r hbase-conf slave2:
            scp -r hbase-0.92.1 slave1:
            scp -r hbase-0.92.1 slave2:
            scp -r .bash_profile slave1:
            scp -r .bash_profile slave2:



            11. Log back in and restart Hadoop
            stop-all.sh
            start-all.sh
            jps

            12. Start HBase
            start-hbase.sh
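
            As an extra sanity check (assuming the 0.92-era default port), the master web UI listens on 60010; a plain HTTP probe confirms it is up:

            curl -sI http://master:60010/ | head -n 1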

            Verification:

            Check the Java processes with the jps command.
            On the master:
            11420 HMaster
            On the ZooKeeper nodes:
            575 HQuorumPeer
            On the RegionServers:
            686 HRegionServer


            13. Simple test
            hbase shell
            hbase(main):006:0> create 'test','data'
            0 row(s) in 1.1190 seconds

            hbase(main):007:0> list
            TABLE
            test
            1 row(s) in 0.0270 seconds

            hbase(main):009:0> put 'test','1','data:1','xxxx'
            0 row(s) in 0.1220 seconds

            hbase(main):010:0> put 'test','1','data:1','xxxx'
            0 row(s) in 0.0120 seconds

            hbase(main):011:0> put 'test','1','data:1','xxxx'
            0 row(s) in 0.0120 seconds

            hbase(main):015:0* put 'test','2','data:2','yyy'
            0 row(s) in 0.0080 seconds

            hbase(main):016:0> put 'test','3','data:3','zzz'
            0 row(s) in 0.0070 seconds

            hbase(main):017:0>
            hbase(main):018:0*
            hbase(main):019:0* scan 'test'
            ROW COLUMN+CELL
            1 column=data:1, timestamp=1333160616029, value=xxxx
            2 column=data:2, timestamp=1333160650780, value=yyy
            3 column=data:3, timestamp=1333160664490, value=zzz
            3 row(s) in 0.0260 seconds
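
            A single row can also be fetched with get; given the scan above, the output should look roughly like this:

            hbase(main):021:0> get 'test','1'
            COLUMN                CELL
             data:1               timestamp=1333160616029, value=xxxx
            1 row(s) in 0.0100 seconds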

            hbase(main):020:0>



            14. Inspect the files created on HDFS
            ./hadoop dfs -lsr /hbase
            Warning: $HADOOP_HOME is deprecated.

            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/-ROOT-
            -rw-r--r-- 2 hadoop supergroup 551 2012-03-31 10:07 /hbase/-ROOT-/.tableinfo.0000000001
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/-ROOT-/.tmp
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/-ROOT-/70236052
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/-ROOT-/70236052/.oldlogs
            -rw-r--r-- 2 hadoop supergroup 411 2012-03-31 10:07 /hbase/-ROOT-/70236052/.oldlogs/hlog.1333159627476
            -rw-r--r-- 2 hadoop supergroup 109 2012-03-31 10:07 /hbase/-ROOT-/70236052/.regioninfo
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/-ROOT-/70236052/info
            -rw-r--r-- 2 hadoop supergroup 714 2012-03-31 10:07 /hbase/-ROOT-/70236052/info/bd225e173164476f88111f622f5a7839
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.META.
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.META./1028785192
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.META./1028785192/.oldlogs
            -rw-r--r-- 2 hadoop supergroup 124 2012-03-31 10:07 /hbase/.META./1028785192/.oldlogs/hlog.1333159627741
            -rw-r--r-- 2 hadoop supergroup 111 2012-03-31 10:07 /hbase/.META./1028785192/.regioninfo
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.META./1028785192/info
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.logs
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.logs/slave1,60020,1333159627316
            -rw-r--r-- 3 hadoop supergroup 293 2012-03-31 10:07 /hbase/.logs/slave1,60020,1333159627316/slave1%2C60020%2C1333159627316.1333159637444
            -rw-r--r-- 3 hadoop supergroup 0 2012-03-31 10:07 /hbase/.logs/slave1,60020,1333159627316/slave1%2C60020%2C1333159627316.1333159637904
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.logs/slave2,60020,1333159627438
            -rw-r--r-- 3 hadoop supergroup 0 2012-03-31 10:07 /hbase/.logs/slave2,60020,1333159627438/slave2%2C60020%2C1333159627438.1333159638583
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:18 /hbase/.oldlogs
            -rw-r--r-- 2 hadoop supergroup 38 2012-03-31 10:07 /hbase/hbase.id
            -rw-r--r-- 2 hadoop supergroup 3 2012-03-31 10:07 /hbase/hbase.version
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:22 /hbase/test
            -rw-r--r-- 2 hadoop supergroup 513 2012-03-31 10:22 /hbase/test/.tableinfo.0000000001
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:22 /hbase/test/.tmp
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:22 /hbase/test/929f7e1caca5825974e0e991543fe2c5
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:22 /hbase/test/929f7e1caca5825974e0e991543fe2c5/.oldlogs
            -rw-r--r-- 2 hadoop supergroup 124 2012-03-31 10:22 /hbase/test/929f7e1caca5825974e0e991543fe2c5/.oldlogs/hlog.1333160541983
            -rw-r--r-- 2 hadoop supergroup 219 2012-03-31 10:22 /hbase/test/929f7e1caca5825974e0e991543fe2c5/.regioninfo
            drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:22 /hbase/test/929f7e1caca5825974e0e991543fe2c5/data


            Error
            ==========================================
            slave1: java.io.IOException: Could not find my address: db1 in list of ZooKeeper quorum servers
            slave1: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.writeMyID(HQuorumPeer.java:133)
            slave1: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:60)
            Reason: the machine's hostname is db1, but the name configured in the quorum list is slave1, even though both resolve to the same IP. HBase uses the hostname the machine reports (and its reverse DNS resolution) to find itself in the ZooKeeper quorum list.
            Solution:
            Change the hostname to slave1 and restart the server, as sketched below.
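
            A minimal sketch for RHEL/CentOS-style systems (assumed from the rpm/chkconfig usage earlier):

            hostname slave1              # takes effect immediately
            vi /etc/sysconfig/network    # set HOSTNAME=slave1 so it survives a reboot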