Setting Up a Big Data Processing Cluster (Hadoop, Spark, HBase)
Setting up the Hadoop cluster

Configure /etc/hosts on every machine so that all the machines can reach one another by name:

120.94.158.190 master
120.94.158.191 secondMaster

1. Create the hadoop user

First create a hadoop group:

sudo addgroup hadoop

Then create a hadoop user and add it to the hadoop group (the first name is the group, the second the user).
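A minimal sketch of that user-creation command, assuming Debian/Ubuntu's adduser tool (the --ingroup form is an assumption, not taken from the article):

sudo adduser --ingroup hadoop hadoop   # assumed: create a user "hadoop" inside the existing "hadoop" group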
                
                
                
Then copy the whole configured Spark directory to the other nodes, e.g. secondMaster:

scp -r spark-1.6.0-bin-hadoop2.4/ secondMaster:/home/hadoop/

Configuring the HBase cluster

1. Configure hbase-site.xml

<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,secondMaster</value>
    <description>Comma separated list of servers in the ZooKeeper quorum.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/zookeeper</value>
    <description>Property from ZooKeeper config zoo.cfg.
    The directory where the snapshot is stored.
    </description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
    <description>The directory shared by RegionServers.
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed ZooKeeper
      true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)
    </description>
  </property>
</configuration> 
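The HDFS URI in hbase.rootdir has to point at the same NameNode address the Hadoop cluster was built with. A minimal sketch of the matching core-site.xml entry, assuming fs.defaultFS was set to hdfs://master:9000 when Hadoop was configured (that value is an assumption, inferred from hbase.rootdir above):

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>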
2. Configure JAVA_HOME and HBASE_HEAPSIZE in hbase-env.sh

# export JAVA_HOME=/usr/java/jdk1.6.0/
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
# export HBASE_HEAPSIZE=1G
export HBASE_HEAPSIZE=4G

3. Configure the regionservers file

secondMaster

4. Create the zookeeper directory

su hadoop
cd
mkdir zookeeper
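Step 4 only creates the ZooKeeper data directory on the local machine, while hbase.zookeeper.quorum lists both master and secondMaster and secondMaster also runs a RegionServer. A plausible follow-up, not spelled out in the article, is to create the directory on the other node and copy the configured HBase directory over, mirroring the Spark scp step above (the directory name hbase/ is a placeholder; the article does not give the HBase version):

# create the ZooKeeper data directory on the other quorum host as well
ssh secondMaster 'mkdir -p /home/hadoop/zookeeper'
# copy the configured HBase directory to secondMaster (hbase/ is a placeholder name)
scp -r hbase/ secondMaster:/home/hadoop/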
                  

