1. Notes

#HiveServer2 adds permission control (user impersonation), which requires the following settings in Hadoop's configuration files

Add the following to core-site.xml:
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>

#Then restart Hadoop so the change takes effect
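
Alternatively, on many clusters the proxy-user settings can be picked up without a full Hadoop restart; a sketch using the standard HDFS/YARN admin refresh commands (not verified on this cluster):
[root@node1 ~]# hdfs dfsadmin -refreshSuperUserGroupsConfiguration
[root@node1 ~]# yarn rmadmin -refreshSuperUserGroupsConfiguration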

2. Start HiveServer2

#Start it in the background; I have set the environment variables for Hive here, so hiveserver2 is on the PATH
[root@node1 ~]#nohup hiveserver2 >>/opt/hive-2.1.1/hiveserver2.log &
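
The same daemon can also be started through the hive launcher, and it can be useful to capture stderr in the log as well; a sketch (the port override is optional and only shown as an example):
[root@node1 ~]# nohup hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 >>/opt/hive-2.1.1/hiveserver2.log 2>&1 &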

3. Check the ports

[root@node1 ~]# netstat -ntlp |grep 10000
tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN      5144/java

#Port 10002 is HiveServer2's web UI; you can open ip:10002 in a browser
[root@node1 ~]# netstat -ntlp |grep 10002
tcp        0      0 0.0.0.0:10002           0.0.0.0:*               LISTEN      5144/java
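
A quick sanity check of the web UI from the shell, assuming curl is installed; it should return the HiveServer2 status page:
[root@node1 ~]# curl -s http://localhost:10002/ | head -n 5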

4. Connect with Beeline

[root@node1 ~]# beeline 
Beeline version 2.1.1 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/default;     #connect to the default database
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive-2.1.1/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://localhost:10000/default;
Enter username for jdbc:hive2://localhost:10000/default;:     #just press Enter
Enter password for jdbc:hive2://localhost:10000/default;:     #just press Enter
Connected to: Apache Hive (version 2.1.1)
Driver: Hive JDBC (version 2.1.1)
19/12/11 10:54:49 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000/default> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
+-----------+--+
No rows selected (1.408 seconds)
0: jdbc:hive2://localhost:10000/default> !exit    #quit
Closing: 0: jdbc:hive2://localhost:10000/default;
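
Beeline can also connect non-interactively, which is handy for scripting; a minimal sketch against the same local HiveServer2 (the script path is hypothetical):
[root@node1 ~]# beeline -u "jdbc:hive2://localhost:10000/default" -e "show tables;"
[root@node1 ~]# beeline -u "jdbc:hive2://localhost:10000/default" -f /tmp/query.sql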

5. Possible problems

When connecting with Beeline you may get an error such as: AccessControlException: Permission denied: user=anonymous, access=EXECUTE, inode="/tmp/hive":root:supergroup:d-wx-w
For errors like this, you can open up the permissions on the /tmp directory in HDFS:
[root@node1 ~]# hdfs dfs -chmod -R 777 /tmp

My setup here is a personal test cluster running as root; in production you would use a regular user, and a blanket chmod like this should be applied with caution.
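
Before resorting to a recursive chmod, it is worth checking what the current permissions actually are; a sketch using the standard HDFS shell (the /tmp/hive path matches the error above):
[root@node1 ~]# hdfs dfs -ls -d /tmp /tmp/hive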


When executing SQL statements, if you get the following error:
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. 
User: root is not allowed to impersonate anonymous (state=08S01,code=1)

You can first kill the hiveserver2 process, then add the following to hive-site.xml:

<property>
    <name>hive.server2.authentication</name>
    <value>NONE</value>
</property>

<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>

<property>
    <name>hive.server2.enable.doAs</name>
    <value>false</value>
</property>
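
After editing hive-site.xml, start HiveServer2 again and reconnect with Beeline; a sketch of the restart cycle, assuming the same log path as above (HiveServer2 appears as RunJar in jps, and <PID> is a placeholder):
[root@node1 ~]# jps | grep RunJar
[root@node1 ~]# kill <PID>
[root@node1 ~]# nohup hiveserver2 >>/opt/hive-2.1.1/hiveserver2.log &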