Integrating Phoenix 4.6.0-HBase-1.0 with CDH 5.4.7: compiling the Phoenix 4.6 source, RegionServer crashes

Posted by 纵然是瞬间 on 2020-01-10 15:54:05

Integrating Phoenix with HBase

Phoenix version: phoenix-4.6.0-HBase-1.0

Source download:

http://apache.cs.uu.nl/phoenix/phoenix-4.6.0-HBase-1.0/src/phoenix-4.6.0-HBase-1.0-src.tar.gz

Binary download:

http://apache.cs.uu.nl/phoenix/phoenix-4.6.0-HBase-1.0/bin/phoenix-4.6.0-HBase-1.0-bin.tar.gz

HBase version: 1.0.0-cdh5.4.7

JDK version: 1.7.0_45

Compiling Phoenix

1. Download the source

After downloading and extracting the source, three files need to be modified: pom.xml, LocalIndexMerger.java, and IndexSplitTransaction.java.

1.1 pom.xml is in the root of phoenix-4.6.0-HBase-1.0-src.

1.2 LocalIndexMerger.java is in phoenix-4.6.0-HBase-1.0-src/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver.

1.3 IndexSplitTransaction.java is in phoenix-4.6.0-HBase-1.0-src/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver.

 

Modifying the source

pom.xml

1. Change the Maven repository to the Cloudera repository, as follows:

<repository>
  <id>cloudera</id>
  <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
</repository>


2. Update the HBase and CDH version properties

Change the Hadoop dependency versions to the Cloudera releases and the HBase version to the CDH release, as follows:

<!-- Hadoop Versions -->
<hbase.version>1.0.0-cdh5.4.7</hbase.version>
<hadoop-two.version>2.6.0-cdh5.4.7</hadoop-two.version>

<!-- Dependency versions -->
<commons-cli.version>1.2</commons-cli.version>
<hadoop.version>2.6.0-cdh5.4.7</hadoop.version>
<flume.version>1.5.0-cdh5.4.7</flume.version>


Modify LocalIndexMerger.java

The change is at line 84.

Before: rss.getServerName(), metaEntries);

After: rss.getServerName(), metaEntries, 1);

Modify IndexSplitTransaction.java

The change is at line 291.

Before:

daughterRegions.getSecond().getRegionInfo(), server.getServerName());

After:

daughterRegions.getSecond().getRegionInfo(), server.getServerName(), 1);
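What the trailing 1 means: assuming, as in the upstream HBase split code, that the call at line 291 is MetaTableAccessor.splitRegion(...), the CDH 5.4.7 HBase jars take one more int argument (presumably the region replication count) than the Apache HBase 1.0.0 jars that Phoenix 4.6.0 is built against. The sketch below is a minimal stand-alone illustration of that call shape; it is not copied from the Phoenix source, and the class and method names around the call are made up. The line-84 change above is the same idea on the region-merge path.

import java.io.IOException;

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Connection;

// Illustrative helper only; the MetaTableAccessor.splitRegion call is the point.
final class SplitMetaUpdateSketch {

    // Records a completed region split in hbase:meta against the CDH 5.4.7 jars.
    static void recordSplit(Connection connection, HRegionInfo parent,
            HRegionInfo daughterA, HRegionInfo daughterB,
            ServerName serverName) throws IOException {
        // Apache HBase 1.0.0 form (what unmodified Phoenix 4.6.0 compiles against):
        //   MetaTableAccessor.splitRegion(connection, parent, daughterA, daughterB, serverName);
        //
        // CDH 5.4.7 form: a trailing int, presumably the region replication count;
        // passing 1 keeps the default single-replica behaviour.
        MetaTableAccessor.splitRegion(connection, parent, daughterA, daughterB, serverName, 1);
    }
}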

 

Recompiling

Run mvn clean install -DskipTests.

 

The new jars

The new jars can be found under phoenix-assembly.

The core jar is under phoenix-core.

 

 

Integrating Phoenix with HBase

Official installation steps: http://phoenix.apache.org/installation.html#SQL_Client

To install a pre-built phoenix, use these directions:

  • Download and expand the latest phoenix-[version]-bin.tar.
  • Add the phoenix-[version]-server.jar to the classpath of all HBase region server and master and remove any previous version. An easy way to do this is to copy it into the HBase lib directory (use phoenix-core-[version].jar for Phoenix 3.x)
  • Restart HBase.
  • Add the phoenix-[version]-client.jar to the classpath of any Phoenix client.

1. On the HBase server, download and extract the Phoenix binary package, then copy in the seven jars mentioned above.

2. Copy the newly built phoenix-4.6.0-HBase-1.0-server.jar into /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hbase/lib on every RegionServer.

3. Configure the environment variables on the server:

export HBASE_HOME=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/hbase
export CLASSPATH=.:$HBASE_HOME/lib/phoenix-4.6.0-HBase-1.0-server.jar:$HBASE_HOME/lib/phoenix-4.6.0-HBase-1.0-client.jar

export PATH=$PATH:$JAVA_HOME/bin:$HBASE_HOME/bin

4. Restart the RegionServer service.

After the restart, the RegionServer hangs in a half-dead (unresponsive) state.

Testing

On the server, run the following from the bin directory under the Phoenix home:

./sqlline.py node1:2181
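Once phoenix-4.6.0-HBase-1.0-client.jar is on the client classpath, the same connectivity check can also be made over JDBC. The sketch below is not part of the original post; it assumes node1:2181 is the ZooKeeper quorum used above, and at this point in the walkthrough it fails just like sqlline does, for the same reason.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixSmokeTest {
    public static void main(String[] args) throws Exception {
        // The Phoenix thick client locates the cluster through ZooKeeper.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:node1:2181");
             Statement stmt = conn.createStatement();
             // SYSTEM.CATALOG is created on the first connection, so this query
             // works on a healthy install even before any user tables exist.
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM SYSTEM.CATALOG")) {
            while (rs.next()) {
                System.out.println("SYSTEM.CATALOG rows: " + rs.getLong(1));
            }
        }
    }
}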

The sqlline command fails with an exception (the screenshot of the stack trace is not reproduced here).

The RegionServer error log shows:

2016-01-13 14:20:27,197 WARN org.apache.hadoop.hbase.io.util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
2016-01-13 14:20:27,448 WARN com.cloudera.cmf.event.publish.EventStorePublisherWithRetry: Failed to publish event: SimpleEvent{attributes={ROLE_TYPE=[REGIONSERVER], CATEGORY=[LOG_MESSAGE], ROLE=[hbase-REGIONSERVER-a1c374abf13fe24d8982a45aa379f538], SEVERITY=[IMPORTANT], SERVICE=[hbase], HOST_IDS=[896038a6-2fe4-4e58-89ec-bae0f871ca0c], SERVICE_TYPE=[HBASE], LOG_LEVEL=[WARN], HOSTS=[node3], EVENTCODE=[EV_LOG_EVENT]}, content=hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size, timestamp=1452666026857}
2016-01-13 14:20:27,722 INFO org.apache.hadoop.hbase.util.ServerCommandLine: env:CDH_FLUME_HOME=/opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/flume-ng
2016-01-13 14:20:27,723 INFO org.apache.hadoop.hbase.util.ServerCommandLine: env:JAVA_HOME=/usr/java/default

 

Error

1. The RegionServer crashes outright.

Reference: http://stackoverflow.com/questions/31849454/using-phoenix-with-cloudera-hbase-installed-from-repo

Configuring HBase to support Phoenix secondary indexes

1. Add the following properties to hbase-site.xml on every RegionServer:

<property> 
  <name>hbase.regionserver.wal.codec</name> 
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value> 
</property>

<property> 
  <name>hbase.region.server.rpc.scheduler.factory.class</name>
  <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value> 
  <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description> 
</property>

<property>
  <name>hbase.rpc.controllerfactory.class</name>
  <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description>
</property>

<property>
  <name>hbase.coprocessor.regionserver.classes</name>
  <value>org.apache.hadoop.hbase.regionserver.LocalIndexMerger</value> 
</property>

2. Add the following properties to hbase-site.xml on every Master:

<property>
  <name>hbase.master.loadbalancer.class</name>                                     
  <value>org.apache.phoenix.hbase.index.balancer.IndexLoadBalancer</value>
</property>

<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.phoenix.hbase.index.master.IndexMasterObserver</value>
</property>
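Once these properties are in place and HBase has been restarted, the secondary-index setup can be verified end to end: creating an index on a mutable table fails unless hbase.regionserver.wal.codec is set to IndexedWALEditCodec on the RegionServers, so a successful CREATE INDEX doubles as a configuration check. The sketch below is not from the original post; the table and index names are made up, and it reuses the jdbc:phoenix:node1:2181 URL assumed earlier.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SecondaryIndexCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:node1:2181");
             Statement stmt = conn.createStatement()) {
            // A plain (mutable) table: indexes on mutable tables are exactly the
            // case that needs the IndexedWALEditCodec WAL codec configured above.
            stmt.execute("CREATE TABLE IF NOT EXISTS TEST_USER ("
                    + " ID BIGINT NOT NULL PRIMARY KEY,"
                    + " NAME VARCHAR,"
                    + " CITY VARCHAR)");
            // This statement fails with a configuration error if the WAL codec
            // property is missing on the RegionServers.
            stmt.execute("CREATE INDEX IF NOT EXISTS TEST_USER_CITY_IDX"
                    + " ON TEST_USER (CITY) INCLUDE (NAME)");
            System.out.println("Secondary index created; the configuration looks good.");
        }
    }
}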

 
