Contents
- 1. Background
- 2. Analysis process
- 2.1 Environment and test data
- 2.1.1 Environment
- 2.1.2 Test data
- 2.2 Analysis of the select statement exception
- 2.2.1 Exception analysis
- 2.2.2 Where to catch the exception
- 2.3 Analysis of the insert overwrite statement exception
- 2.3.1 Exception analysis
- 2.3.2 Where to catch the exception
- 2.3.3 Read (readRow) exceptions
- 3. Code changes and conclusion
- 4. Deploying the modified jars on the CDH cluster
- 5. Other possible approaches?
1. Background
Hive keeps its metadata separate from the data files, and a table's column types can be altered on their own, so the metadata and the file types can drift apart; once they do, SQL queries against the data fail. Unfortunately, that is exactly our situation: the collected data is synced into Hive, large volumes are written in real time every day and generate a huge number of small files, and because column type changes were never restricted, a column could be changed to a type incompatible with the existing data. Queries started failing, the insert overwrite jobs that merge small files kept failing as well, small files kept piling up on HDFS, and query performance suffered badly.
We are on Hive 2.1.1-cdh6.3.0 (CDH 6.3.0). Searching turned up no configuration that lets Hive simply ignore this kind of type mismatch, and this version lacks the type-compatibility handling that newer versions appear to have; some quick debugging also suggested that Hive hooks cannot intercept execution at the step where the data is actually read. As a last resort we tried modifying the source code, and that turned out to work.
This post documents how we modified the Hive (2.1.1-cdh6.3.0) source to handle SQL query failures caused by a mismatch between table metadata and file types, by returning NULL for the columns whose types are incompatible.
2. Analysis process
2.1 Environment and test data
2.1.1 Environment
CDH 6.3.0 with Hive 2.1.1-cdh6.3.0. As before, we debug hiveserver2; see the earlier post on debugging the Hive source for the setup. Incidentally, cloudera/hive can no longer be found on GitHub (perhaps it is no longer meant to be open source), but fortunately someone kept a copy on Gitee at https://gitee.com/gabry/cloudera-hive; save your own copy of that repository if you need it.
2.1.2 Test data
Create a table t1 (we use Parquet by default and this post has only been tested with Parquet data; partitioned tables work too, but the example here is a non-partitioned table) and insert two rows; then create a table error_type with the same column names but a different type for the id column:
create table t1(id float,content string) stored as parquet;
insert into t1 values(1.1,'content1'),(2.2,'content2');
create table error_type(id int,content string) stored as parquet;
Copy t1's data file directly into the error_type table's directory on HDFS:
hdfs dfs -cp /user/hive/warehouse/testdb.db/t1/000000_0 /user/hive/warehouse/testdb.db/error_type/
Querying the error_type table with SQL now fails:
0: jdbc:hive2://localhost:10000> select * from error_type;
INFO : Compiling command(queryId=hive_20220306113526_62d5507c-8df1-478b-8f9f-4ea1b8601df9): select * from error_type
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:error_type.id, type:int, comment:null), FieldSchema(name:error_type.content, type:string, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20220306113526_62d5507c-8df1-478b-8f9f-4ea1b8601df9); Time taken: 0.13 seconds
INFO : Executing command(queryId=hive_20220306113526_62d5507c-8df1-478b-8f9f-4ea1b8601df9): select * from error_type
INFO : Completed executing command(queryId=hive_20220306113526_62d5507c-8df1-478b-8f9f-4ea1b8601df9); Time taken: 0.001 seconds
INFO : OK
Error: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassCastException: org.apache.hadoop.io.FloatWritable cannot be cast to org.apache.hadoop.io.IntWritable (state=,code=0)
2.2 Analysis of the select statement exception
2.2.1 Exception analysis
Start debugging. The call stack at the point where the ClassCastException above occurs (copied from IDEA) is:
getPrimitiveJavaObject:46, WritableIntObjectInspector (org.apache.hadoop.hive.serde2.objectinspector.primitive)
copyToStandardObject:412, ObjectInspectorUtils (org.apache.hadoop.hive.serde2.objectinspector)
toThriftPayload:170, SerDeUtils (org.apache.hadoop.hive.serde2)
convert:49, ThriftFormatter (org.apache.hadoop.hive.serde2.thrift)
process:94, ListSinkOperator (org.apache.hadoop.hive.ql.exec)
forward:882, Operator (org.apache.hadoop.hive.ql.exec)
process:95, SelectOperator (org.apache.hadoop.hive.ql.exec)
forward:882, Operator (org.apache.hadoop.hive.ql.exec)
process:130, TableScanOperator (org.apache.hadoop.hive.ql.exec)
pushRow:438, FetchOperator (org.apache.hadoop.hive.ql.exec)
pushRow:430, FetchOperator (org.apache.hadoop.hive.ql.exec)
fetch:146, FetchTask (org.apache.hadoop.hive.ql.exec)
getResults:2227, Driver (org.apache.hadoop.hive.ql)
getNextRowSet:491, SQLOperation (org.apache.hive.service.cli.operation)
getOperationNextRowSet:297, OperationManager (org.apache.hive.service.cli.operation)
fetchResults:869, HiveSessionImpl (org.apache.hive.service.cli.session)
invoke:-1, GeneratedMethodAccessor5 (sun.reflect)
invoke:43, DelegatingMethodAccessorImpl (sun.reflect)
invoke:498, Method (java.lang.reflect)
invoke:78, HiveSessionProxy (org.apache.hive.service.cli.session)
access$000:36, HiveSessionProxy (org.apache.hive.service.cli.session)
run:63, HiveSessionProxy$1 (org.apache.hive.service.cli.session)
doPrivileged:-1, AccessController (java.security)
doAs:422, Subject (javax.security.auth)
doAs:1962, UserGroupInformation (org.apache.hadoop.security)
invoke:59, HiveSessionProxy (org.apache.hive.service.cli.session)
fetchResults:-1, $Proxy39 (com.sun.proxy)
fetchResults:507, CLIService (org.apache.hive.service.cli)
FetchResults:708, ThriftCLIService (org.apache.hive.service.cli.thrift)
getResult:1717, TCLIService$Processor$FetchResults (org.apache.hive.service.rpc.thrift)
getResult:1702, TCLIService$Processor$FetchResults (org.apache.hive.service.rpc.thrift)
process:39, ProcessFunction (org.apache.thrift)
process:39, TBaseProcessor (org.apache.thrift)
process:56, TSetIpAddressProcessor (org.apache.hive.service.auth)
run:286, TThreadPoolServer$WorkerProcess (org.apache.thrift.server)
runWorker:1149, ThreadPoolExecutor (java.util.concurrent)
run:624, ThreadPoolExecutor$Worker (java.util.concurrent)
run:748, Thread (java.lang)
The method at WritableIntObjectInspector.getPrimitiveJavaObject:46 is:
@Override
public Object getPrimitiveJavaObject(Object o) {
  return o == null ? null : Integer.valueOf(((IntWritable) o).get());
}
The arguments at this point are:
this = {WritableIntObjectInspector@10276}
typeInfo = {PrimitiveTypeInfo@10277} "int"
o = {FloatWritable@10262} "1.1"
Here the FloatWritable object read from the file is cast to IntWritable, the type corresponding to the int column in the table metadata, and the ClassCastException is thrown.
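As a standalone illustration (a hypothetical demo class, not part of Hive or of the patch), the mismatch boils down to this: the file hands back a FloatWritable, while the inspector chosen from the table metadata blindly casts it to IntWritable:
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Writable;

// Hypothetical standalone demo of the failing cast, not Hive code.
public class WritableCastDemo {
  public static void main(String[] args) {
    Writable fromFile = new FloatWritable(1.1f); // what the Parquet file actually contains
    try {
      // what the int inspector effectively does because the metadata says the column is int
      int value = ((IntWritable) fromFile).get();
      System.out.println(value);
    } catch (ClassCastException e) {
      // the same failure the query reports: FloatWritable cannot be cast to IntWritable
      System.out.println("ClassCastException: " + e.getMessage());
    }
  }
}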
Putting a breakpoint on the constructor of the HiveException class seen in the log shows that the HiveException is thrown from the stack frame process:94, ListSinkOperator (org.apache.hadoop.hive.ql.exec), i.e. in this method:
@Override
@SuppressWarnings("unchecked")
public void process(Object row, int tag) throws HiveException {
  try {
    res.add(fetcher.convert(row, inputObjInspectors[0]));
    numRows++;
  } catch (Exception e) {
    throw new HiveException(e);
  }
}
2.2.2 Where to catch the exception
After the exception is thrown, these are the stack frames it passes through before being caught and rethrown as a HiveException:
getPrimitiveJavaObject:46, WritableIntObjectInspector (org.apache.hadoop.hive.serde2.objectinspector.primitive)
copyToStandardObject:412, ObjectInspectorUtils (org.apache.hadoop.hive.serde2.objectinspector)
toThriftPayload:170, SerDeUtils (org.apache.hadoop.hive.serde2)
convert:49, ThriftFormatter (org.apache.hadoop.hive.serde2.thrift)
getPrimitiveJavaObject:46 in WritableIntObjectInspector is obviously a type-specific implementation, so it is not the right place to catch the exception; copyToStandardObject:412 in ObjectInspectorUtils has fairly involved logic of its own; either toThriftPayload:170 in SerDeUtils or convert:49 in ThriftFormatter would do, and convert:49 in ThriftFormatter happens to have a loop that handles every field of a row, so catching it there looks cleanest:
@Override
public Object convert(Object row, ObjectInspector rowOI) throws Exception {
  StructObjectInspector structOI = (StructObjectInspector) rowOI;
  List<? extends StructField> fields = structOI.getAllStructFieldRefs();
  Object[] converted = new Object[fields.size()];
  for (int i = 0 ; i < converted.length; i++) {
    StructField fieldRef = fields.get(i);
    Object field = structOI.getStructFieldData(row, fieldRef);
    converted[i] = field == null ? null :
        SerDeUtils.toThriftPayload(field, fieldRef.getFieldObjectInspector(), protocol);
  }
  return converted;
}
Change the line that assigns converted[i] to:
try {
  converted[i] = field == null ? null :
      SerDeUtils.toThriftPayload(field, fieldRef.getFieldObjectInspector(), protocol);
} catch (ClassCastException e) {
  converted[i] = null;
}
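Put together, the patched ThriftFormatter.convert reads roughly as follows (just a sketch merging the original method with the change above; nothing else in the method is touched):
@Override
public Object convert(Object row, ObjectInspector rowOI) throws Exception {
  StructObjectInspector structOI = (StructObjectInspector) rowOI;
  List<? extends StructField> fields = structOI.getAllStructFieldRefs();
  Object[] converted = new Object[fields.size()];
  for (int i = 0; i < converted.length; i++) {
    StructField fieldRef = fields.get(i);
    Object field = structOI.getStructFieldData(row, fieldRef);
    try {
      converted[i] = field == null ? null :
          SerDeUtils.toThriftPayload(field, fieldRef.getFieldObjectInspector(), protocol);
    } catch (ClassCastException e) {
      // the file's type is incompatible with the table metadata: return NULL for this field
      converted[i] = null;
    }
  }
  return converted;
}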
With this change in place (deployment is covered in a later section), select * from error_type no longer throws, and both returned rows have NULL in the id column.
2.3 Analysis of the insert overwrite statement exception
2.3.1 Exception analysis
We assumed that this one change would be enough, but running the small-file merge SQL insert overwrite table error_type select * from error_type still fails, with the following call stack in the log:
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.FloatWritable cannot be cast to org.apache.hadoop.io.IntWritable
at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector.get(WritableIntObjectInspector.java:36)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$IntDataWriter.write(DataWritableWriter.java:385)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$GroupDataWriter.write(DataWritableWriter.java:199)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$MessageDataWriter.write(DataWritableWriter.java:215)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:88)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:60)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:32)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:123)
at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:179)
at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:46)
at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:136)
at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:149)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:769)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:882)
at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:882)
at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:146)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:484)
2.3.2 Where to catch the exception
After a great deal of debugging (these statements run as MR jobs submitted to YARN, so first run set hive.exec.mode.local.auto=true; to make Hive execute the SQL in local mode, otherwise the breakpoints never fire), it turns out that a method close to where the exception is thrown (perhaps that can serve as a rule of thumb), org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.GroupDataWriter#write, also writes each field in a loop, much like ThriftFormatter.convert:49 earlier:
@Override
public void write(Object value) {
  for (int i = 0; i < structFields.size(); i++) {
    StructField field = structFields.get(i);
    Object fieldValue = inspector.getStructFieldData(value, field);
    if (fieldValue != null) {
      String fieldName = field.getFieldName();
      DataWriter writer = structWriters[i];
      recordConsumer.startField(fieldName, i);
      writer.write(fieldValue);
      recordConsumer.endField(fieldName, i);
    }
  }
}
Here writer.write(fieldValue) is the org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter$GroupDataWriter.write(DataWritableWriter.java:199) frame from the stack trace in the error message. The method also contains the line Object fieldValue = inspector.getStructFieldData(value, field); and debugging shows that this line and the line Object field = structOI.getStructFieldData(row, fieldRef); in convert:49, ThriftFormatter (where we caught the exception earlier) both end up in org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector#getStructFieldData:
@Override
@SuppressWarnings("unchecked")
public Object getStructFieldData(Object data, StructField fieldRef) {
  if (data == null) {
    return null;
  }
  // We support both List<Object> and Object[]
  // so we have to do differently.
  boolean isArray = data.getClass().isArray();
  if (!isArray && !(data instanceof List)) {
    if (!warned) {
      LOG.warn("Invalid type for struct " + data.getClass());
      LOG.warn("ignoring similar errors.");
      warned = true;
    }
    return data;
  }
  int listSize = (isArray ? ((Object[]) data).length : ((List<Object>) data)
      .size());
  MyField f = (MyField) fieldRef;
  if (fields.size() != listSize && !warned) {
    // TODO: remove this
    warned = true;
    LOG.warn("Trying to access " + fields.size()
        + " fields inside a list of " + listSize + " elements: "
        + (isArray ? Arrays.asList((Object[]) data) : (List<Object>) data));
    LOG.warn("ignoring similar errors.");
  }
  int fieldID = f.getFieldID();
  if (fieldID >= listSize) {
    return null;
  } else if (isArray) {
    return ((Object[]) data)[fieldID];
  } else {
    return ((List<Object>) data).get(fieldID);
  }
}
The first parameter of StandardStructObjectInspector#getStructFieldData is a row of data read from the file, and the second is an instance of org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.MyField, which carries the index of the field to fetch. The method basically pulls a value out of the row by index, with no type checking at all. MyField also holds the ObjectInspector corresponding to the field's type in the table metadata, so we can use that ObjectInspector to try reading the field value just fetched; if the types conflict, catch the ClassCastException and have the method return null, so the field stays NULL through the rest of the read and write pipeline. That achieves what this post is after, for both the earlier select statement and the insert overwrite statement.
Since all of our Hive tables use primitive types, we call getPrimitiveJavaObject from the org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector interface that the objectInspector implements, letting polymorphism handle reading each data type. Change the final if/else block of StandardStructObjectInspector#getStructFieldData to:
Object objValue;
if (fieldID >= listSize) {
  return null;
} else if (isArray) {
  objValue = ((Object[]) data)[fieldID];
} else {
  objValue = ((List<Object>) data).get(fieldID);
}
ObjectInspector objectInspector = f.getFieldObjectInspector();
if (Category.PRIMITIVE.equals(objectInspector.getCategory())) {
  try {
    ((PrimitiveObjectInspector) objectInspector).getPrimitiveJavaObject(objValue);
  } catch (ClassCastException | UnsupportedOperationException e) {
    /*
    UnsupportedOperationException:
    e.g. when the Hive column type is string, the objectInspector obtained here is an instance of
    ParquetStringInspector, and
    org.apache.hadoop.hive.ql.io.parquet.serde.primitive.ParquetStringInspector.getPrimitiveJavaObject
    throws UnsupportedOperationException when the argument is not one of the types that method checks for.
    */
    objValue = null;
  }
}
return objValue;
(With this change in place, the exception handling added in section 2.2.2 is actually no longer needed for primitive types.)
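To see the probe-and-null pattern from this patch in isolation, here is a minimal hypothetical sketch (the demo class and its setup are illustrative only; it borrows the writableIntObjectInspector from PrimitiveObjectInspectorFactory to play the role of the metadata-side inspector):
import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.io.FloatWritable;

// Hypothetical standalone sketch of the probe used in the getStructFieldData change.
public class InspectorProbeDemo {
  public static void main(String[] args) {
    // The table metadata says the column is int ...
    PrimitiveObjectInspector metadataInspector =
        PrimitiveObjectInspectorFactory.writableIntObjectInspector;
    // ... but the file actually holds a float for this column.
    Object objValue = new FloatWritable(1.1f);
    try {
      metadataInspector.getPrimitiveJavaObject(objValue); // probe only, the result is discarded
    } catch (ClassCastException | UnsupportedOperationException e) {
      objValue = null; // incompatible type: surface the field as NULL instead of failing the job
    }
    System.out.println(objValue); // prints "null"
  }
}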
2.3.3 Read (readRow) exceptions
Running insert overwrite on some tables still hit the following two errors (really the same kind of error):
Caused by: java.lang.UnsupportedOperationException: Cannot inspect org.apache.hadoop.io.LongWritable
at org.apache.hadoop.hive.ql.io.parquet.serde.primitive.ParquetStringInspector.getPrimitiveJavaObject(ParquetStringInspector.java:77)
at org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getLong(PrimitiveObjectInspectorUtils.java:709)
at org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorConverter$LongConverter.convert(PrimitiveObjectInspectorConverter.java:182)
at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters$StructConverter.convert(ObjectInspectorConverters.java:416)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.readRow(MapOperator.java:126)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.access$200(MapOperator.java:89)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:483)
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.FloatWritable cannot be cast to org.apache.hadoop.io.IntWritable
at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector.get(WritableIntObjectInspector.java:36)
at org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getDouble(PrimitiveObjectInspectorUtils.java:755)
at org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getFloat(PrimitiveObjectInspectorUtils.java:796)
at org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorConverter$FloatConverter.convert(PrimitiveObjectInspectorConverter.java:211)
at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters$StructConverter.convert(ObjectInspectorConverters.java:416)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.readRow(MapOperator.java:126)
at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.access$200(MapOperator.java:89)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:483)
Both are exceptions thrown at line 483 of MapOperator, in the call into MapOperator$MapOpCtx.forward; only the data type that finally fails differs between the two logs. Without analysing the code any further, we used the same approach as before: org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters.StructConverter#convert also loops over every field, so take the line that the stack trace pinpoints at ObjectInspectorConverters.java:416:
outputFieldValue = fieldConverters.get(f).convert(inputFieldValue);
and change it to:
try {
  outputFieldValue = fieldConverters.get(f).convert(inputFieldValue);
} catch (ClassCastException | UnsupportedOperationException e) {
  outputFieldValue = null;
}
After this last change, all of the tables we had with metadata/file type conflicts could be queried normally again, and their small files could be merged with insert overwrite.
3. Code changes and conclusion
All of the modified code has been pushed to this fork; see this commit: https://gitee.com/Ox3E6/cloudera-hive/commit/1e31127162b3bb29716580692c2d1fe30543f057
I am still not familiar with the details of the Hive code maze or with some of its overall flow; the handling added here was driven purely by where the errors occurred, for example the exceptions in sections 2.3.1 and 2.3.3 are both thrown while the data is actually being processed. More testing is still needed, fixing issues one at a time; as long as converting the value to NULL does not break Hive's read/write paths or any other logic, that is good enough.
4. Deploying the modified jars on the CDH cluster
Copy over every jar starting with hive from the build's lib directory.
On a CDH machine, find where the Hive jars live, using the hive-serde jar as an example:
[root@dev-master2 ~]# find / -name hive-serde-2.1.1-cdh6.3.0.jar
/opt/cloudera/cm/cloudera-navigator-server/libs/cdh6/hive-serde-2.1.1-cdh6.3.0.jar
/opt/cloudera/cm/cloudera-scm-telepub/libs/cdh6/hive-serde-2.1.1-cdh6.3.0.jar
/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/jars/hive-serde-2.1.1-cdh6.3.0.jar
/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hive/lib/hive-serde-2.1.1-cdh6.3.0.jar
/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/oozie/embedded-oozie-server/webapp/WEB-INF/lib/hive-serde-2.1.1-cdh6.3.0.jar
Some of those are symbolic links; the jars only need to be placed in these three directories:
/opt/cloudera/cm/common_jars/
/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/jars/
/opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/oozie/embedded-oozie-server/webapp/WEB-INF/lib/
In the last two directories the file names are exactly the jar names produced by the build, but under common_jars the file names contain a hash generated by some algorithm I do not know, for example:
[root@dev-master2 scripts]# ls -lh /opt/cloudera/cm/common_jars/hive*
-rw-r--r--. 1 root root 46K Jul 19 2019 /opt/cloudera/cm/common_jars/hive-ant-2.1.1-cdh6.3.0.f857dabb5222c1969c9f4087c8bfaac3.jar
-rw-r--r--. 1 root root 12K Jul 19 2019 /opt/cloudera/cm/common_jars/hive-classification-2.1.1-cdh6.3.0.c2ac9c5cf1fbb22aeda542f287ecbaa4.jar
-rw-r--r--. 1 root root 46K Jul 19 2019 /opt/cloudera/cm/common_jars/hive-cli-2.1.1-cdh6.3.0.f8741782bcbf8b4b58f537da6346e0ff.jar
-rw-r--r--. 1 root root 324K Jul 19 2019 /opt/cloudera/cm/common_jars/hive-common-1.1.0-cdh5.12.0.10beb989e3d6a390afce045b1e865bde.jar
-rw-r--r--. 1 root root 429K Jul 19 2019 /opt/cloudera/cm/common_jars/hive-common-2.1.1-cdh6.3.0.87dadce3138dc2c5c2e696cc6f6f7927.jar
......
We ran into the same thing before when replacing a YARN jar that had a concurrent-modification bug; reusing the original hashed file name for the replacement jar did not trigger any checksum-verification errors.
So put the script below in a directory together with all of the hive*.jar files from the build's lib directory, copy that directory to every machine in the CDH cluster (with whatever distribution scripts you have), run the script on every machine to replace the Hive jars, and then restart Hive.
copy_jars.sh (the hashed part of the file names can be copied off a machine and turned into these cp lines with a regex replacement)
#!/usr/bin/env bash
# Based on the /usr/bin/hive script. Reference: http://stackoverflow.com/questions/59895/can-a-bash-script-tell-what-directory-its-stored-in
current_dir=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
echo "current_dir: $current_dir"
cp $current_dir/hive*.jar /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/jars/
cp $current_dir/hive*.jar /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/oozie/embedded-oozie-server/webapp/WEB-INF/lib/
cp $current_dir/hive-ant-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-ant-2.1.1-cdh6.3.0.f857dabb5222c1969c9f4087c8bfaac3.jar
cp $current_dir/hive-classification-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-classification-2.1.1-cdh6.3.0.c2ac9c5cf1fbb22aeda542f287ecbaa4.jar
cp $current_dir/hive-cli-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-cli-2.1.1-cdh6.3.0.f8741782bcbf8b4b58f537da6346e0ff.jar
cp $current_dir/hive-common-1.1.0-cdh5.12.0.jar /opt/cloudera/cm/common_jars/hive-common-1.1.0-cdh5.12.0.10beb989e3d6a390afce045b1e865bde.jar
cp $current_dir/hive-common-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-common-2.1.1-cdh6.3.0.87dadce3138dc2c5c2e696cc6f6f7927.jar
cp $current_dir/hive-exec-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-exec-2.1.1-cdh6.3.0.15d37ff81bca70d35b904a6946abea49.jar
cp $current_dir/hive-jdbc-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-jdbc-2.1.1-cdh6.3.0.a9016068a26246ac47c4b2637db33adb.jar
cp $current_dir/hive-llap-client-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-llap-client-2.1.1-cdh6.3.0.701f1dfc66958f0d8feab78d602b9cb6.jar
cp $current_dir/hive-llap-common-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-llap-common-2.1.1-cdh6.3.0.6c733dcdfa1e52ce79dc1b0066220a00.jar
cp $current_dir/hive-llap-server-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-llap-server-2.1.1-cdh6.3.0.105d9633082dfb213b9d390dc3df8087.jar
cp $current_dir/hive-llap-tez-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-llap-tez-2.1.1-cdh6.3.0.47ac2463acf7de1929b57c4da5ac7f41.jar
cp $current_dir/hive-metastore-1.1.0-cdh5.12.0.jar /opt/cloudera/cm/common_jars/hive-metastore-1.1.0-cdh5.12.0.f439e1b26177542bfc57e428717a265a.jar
cp $current_dir/hive-metastore-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-metastore-2.1.1-cdh6.3.0.4a407e44f9168f014f41edd4a56d5028.jar
cp $current_dir/hive-orc-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-orc-2.1.1-cdh6.3.0.0d1f0cf02d1bdad572cca211654c64af.jar
cp $current_dir/hive-serde-1.1.0-cdh5.12.0.jar /opt/cloudera/cm/common_jars/hive-serde-1.1.0-cdh5.12.0.62c4570f4681c0698b9f5f5ab6baab4a.jar
cp $current_dir/hive-serde-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-serde-2.1.1-cdh6.3.0.bde9116deea651dbf085034565504351.jar
cp $current_dir/hive-service-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-service-2.1.1-cdh6.3.0.0c28a52a856414cb45d0b827bd7884e9.jar
cp $current_dir/hive-service-rpc-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-service-rpc-2.1.1-cdh6.3.0.fde12a48a558128e4d15bfb47f90cfb4.jar
cp $current_dir/hive-shims-1.1.0-cdh5.12.0.jar /opt/cloudera/cm/common_jars/hive-shims-1.1.0-cdh5.12.0.2698b9ffda7580409fc299d986f41ded.jar
cp $current_dir/hive-shims-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-shims-2.1.1-cdh6.3.0.a151f9e3d14dfeb5bb2b34e0b2ef8a28.jar
cp $current_dir/hive-storage-api-2.1.1-cdh6.3.0.jar /opt/cloudera/cm/common_jars/hive-storage-api-2.1.1-cdh6.3.0.fb98d759511d27287bcd20e48b40f961.jar
5. Other possible approaches?
- For example, approach the problem via the table's INPUTFORMAT
- Once more familiar with Hive's overall flow, see whether some other place allows a more convenient or a more global fix
- When time allows, look at how Hive 3 handles type compatibility and "learn" from it