Generating and Parsing Parquet Files with Java

  This is a follow-up to "Parquet file testing (1): generating Parquet files with Java and loading them directly into Hive". That post only covered generation; it did not cover parsing, and it also left open how the schema definition actually works. The goal here is to generate a correct Parquet file that Spark can read without issues (see also "Spark practice tests (2): defining the field schema of a Parquet file").

Test preparation

  First, define a structure; the generated Parquet file will store content with the following fields:

import lombok.Data;

/**
 * Test entity.
 * Each property maps to a table column.
 */
@Data
public class TestEntity {
    private int intValue;
    private long longValue;
    private double doubleValue;
    private String stringValue;
    private byte[] byteValue;
    private byte[] byteNone;
}

  The test code that generates the Parquet file is as follows:

import lombok.RequiredArgsConstructor;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.ParquetProperties;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetFileWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.io.api.Binary;
import org.apache.parquet.schema.MessageType;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

import java.io.IOException;

@Component
@RequiredArgsConstructor
public class TestComponent {

    @Autowired
    @Qualifier("testMessageType")
    private MessageType testMessageType;

    private static final String javaDirPath = "C:\\Users\\Lenovo\\Desktop\\对比\\java\\";

    /**
     * Writes one entity to a new Parquet file.
     */
    public void javaWriteToParquet(TestEntity testEntity) throws IOException {
        String filePath = javaDirPath + System.currentTimeMillis() + ".parquet";
        // try-with-resources ensures the writer (and the file footer) is closed properly
        try (ParquetWriter<Group> parquetWriter = ExampleParquetWriter.builder(new Path(filePath))
                .withWriteMode(ParquetFileWriter.Mode.CREATE)
                .withWriterVersion(ParquetProperties.WriterVersion.PARQUET_1_0)
                .withCompressionCodec(CompressionCodecName.SNAPPY)
                .withType(testMessageType).build()) {
            // assemble one record (Group) matching the schema and write it
            SimpleGroupFactory simpleGroupFactory = new SimpleGroupFactory(testMessageType);
            Group group = simpleGroupFactory.newGroup();
            group.add("intValue", testEntity.getIntValue());
            group.add("longValue", testEntity.getLongValue());
            group.add("doubleValue", testEntity.getDoubleValue());
            group.add("stringValue", testEntity.getStringValue());
            group.add("byteValue", Binary.fromConstantByteArray(testEntity.getByteValue()));
            group.add("byteNone", Binary.EMPTY);
            parquetWriter.write(group);
        }
    }
}
}

  ※When defining the schema, one question comes up: how should a field's repetition be configured? Since the code above was pasted from elsewhere, it was quite confusing at first what this setting is even for. Let's start with the classic explanation of the three modes (easily found online, and actually reliable):

Mode     | Meaning
---------|------------------------
REQUIRED | occurs exactly 1 time
OPTIONAL | occurs 0 or 1 times
REPEATED | occurs 0 or more times
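  In Parquet's schema DSL these modes appear as a keyword in front of each field's type. A minimal illustration (hypothetical field names, parseable with parquet-mr's MessageTypeParser):

import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

// required = exactly 1, optional = 0 or 1, repeated = 0..n
MessageType demo = MessageTypeParser.parseMessageType(
        "message demo { required int32 a; optional int32 b; repeated int32 c; }");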

  ※Of course, the truly confusing part is what exactly counts as "1 time", "0 times", or "many times". The tests below pin this down mode by mode.

Defining the schema with REQUIRED fields

  The test cases here again use SpringBootTest. Next, configure a Parquet schema:

import org.apache.parquet.schema.LogicalTypeAnnotation;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.Types;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import static org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName.*;

@Configuration
public class TestConfiguration {

    @Bean("testMessageType")
    public MessageType testMessageType() {
        Types.MessageTypeBuilder messageTypeBuilder = Types.buildMessage();
        messageTypeBuilder.required(INT32).named("intValue");
        messageTypeBuilder.required(INT64).named("longValue");
        messageTypeBuilder.required(DOUBLE).named("doubleValue");
        messageTypeBuilder.required(BINARY).as(LogicalTypeAnnotation.stringType()).named("stringValue");
        messageTypeBuilder.required(BINARY).as(LogicalTypeAnnotation.bsonType()).named("byteValue");
        messageTypeBuilder.required(BINARY).as(LogicalTypeAnnotation.bsonType()).named("byteNone");
        return messageTypeBuilder.named("test");
    }

}
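  As a quick sanity check (my own habit, not from the original post): MessageType.toString() renders the built schema in the same DSL, so printing the bean should show every field prefixed with required:

// System.out.println(testMessageType) prints something like:
// message test {
//   required int32 intValue;
//   required int64 longValue;
//   required double doubleValue;
//   required binary stringValue (STRING);
//   required binary byteValue (BSON);
//   required binary byteNone (BSON);
// }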

  Next, run the test method to generate the Parquet file.

import com.lyan.parquet_convert.test.TestComponent;
import com.lyan.parquet_convert.test.TestEntity;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import javax.annotation.Resource;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = ParquetConvertApplication.class)
public class TestConvertTest {

    @Resource
    private TestComponent testComponent;

    @Test
    public void startTest() throws IOException {
        TestEntity testEntity = new TestEntity();
        testEntity.setIntValue(100);
        testEntity.setLongValue(200);
        testEntity.setDoubleValue(300);
        testEntity.setStringValue("测试");
        testEntity.setByteValue("不为空的值".getBytes(StandardCharsets.UTF_8));
        testComponent.javaWriteToParquet(testEntity);
    }
}

  Part of the output for the generated Parquet file is shown below.

# Parquet file schema
root
 |-- intValue: integer (nullable = true)
 |-- longValue: long (nullable = true)
 |-- doubleValue: double (nullable = true)
 |-- stringValue: string (nullable = true)
 |-- byteValue: binary (nullable = true)
 |-- byteNone: binary (nullable = true)
# Parquet file content
+--------+---------+-----------+-----------+--------------------+--------+
|intValue|longValue|doubleValue|stringValue|           byteValue|byteNone|
+--------+---------+-----------+-----------+--------------------+--------+
|     100|      200|      300.0|       测试|[E4 B8 8D E4 B8 B...|      []|
+--------+---------+-----------+-----------+--------------------+--------+
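  For reference, the dumps above have the shape of Spark's printSchema() and show() output; a minimal Java sketch that would produce them (the file name here is made up):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().master("local[*]").appName("parquet-check").getOrCreate();
Dataset<Row> df = spark.read().parquet("C:\\Users\\Lenovo\\Desktop\\对比\\java\\1621911000000.parquet");
df.printSchema(); // schema info
df.show();        // content info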

  One question arises here: the byteNone field is just an empty value, so how do we get it to display as null? TestEntity.setByteNone() obviously won't do it; the value can only be controlled at the Group.add() call. Fine then, let's simply not fill in this value at all and comment the line out, as sketched below:

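  The only change needed in javaWriteToParquet is to skip the add() call for byteNone:

// group.add("byteNone", Binary.EMPTY); // commented out: no value written for byteNone

  With that line commented out, the write fails: parquet-mr refuses to write a record that is missing a REQUIRED value. Hence the first rule: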


A field declared REQUIRED cannot be left empty.

Defining the schema with OPTIONAL fields

  With the previous test result in hand, this one is easy. Continuing from above, modify the schema definition directly: change the byteNone field's definition to OPTIONAL, as sketched below.

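  The change in TestConfiguration is a single line: the builder's optional() replaces required():

messageTypeBuilder.optional(BINARY).as(LogicalTypeAnnotation.bsonType()).named("byteNone");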


  Run the test method to generate the file; part of the output:

# Parquet file content
+--------+---------+-----------+-----------+--------------------+--------+
|intValue|longValue|doubleValue|stringValue|           byteValue|byteNone|
+--------+---------+-----------+-----------+--------------------+--------+
|     100|      200|      300.0|       测试|[E4 B8 8D E4 B8 B...|    null|
+--------+---------+-----------+-----------+--------------------+--------+

  Here you can clearly see that the byteNone value changed from [] to null. So a field declared OPTIONAL may be null, i.e. it occurs 0 or 1 times.

Defining the schema with REPEATED fields

Now for the REPEATED mode, with the first three fields switched over, as sketched below:

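  A sketch of the corresponding change in TestConfiguration (assuming, from the schema output below, that the three numeric fields were switched to repeated()):

messageTypeBuilder.repeated(INT32).named("intValue");
messageTypeBuilder.repeated(INT64).named("longValue");
messageTypeBuilder.repeated(DOUBLE).named("doubleValue");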


  Generate the file and inspect its schema and content:

# Parquet file schema
root
 |-- intValue: array (nullable = true)
 |    |-- element: integer (containsNull = true)
 |-- longValue: array (nullable = true)
 |    |-- element: long (containsNull = true)
 |-- doubleValue: array (nullable = true)
 |    |-- element: double (containsNull = true)
 |-- stringValue: string (nullable = true)
 |-- byteValue: binary (nullable = true)
 |-- byteNone: binary (nullable = true)
# Parquet file content
+--------+---------+-----------+-----------+--------------------+--------+
|intValue|longValue|doubleValue|stringValue|           byteValue|byteNone|
+--------+---------+-----------+-----------+--------------------+--------+
|   [100]|    [200]|    [300.0]|       测试|[E4 B8 8D E4 B8 B...|    null|
+--------+---------+-----------+-----------+--------------------+--------+

REPEATED means an array, i.e. the field occurs 0 or more times.
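  To actually store more than one element in a REPEATED field, call Group.add() repeatedly with the same field name; a small sketch inside javaWriteToParquet:

// each add() on a REPEATED field appends one more element
group.add("intValue", 100);
group.add("intValue", 101); // the column now reads back as [100, 101]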

Parsing the Parquet file content

  Parsing a Parquet file is quite simple and takes little code:

import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.example.GroupReadSupport;

import java.io.IOException;

public void javaReadParquet(String path) throws IOException {
    GroupReadSupport readSupport = new GroupReadSupport();
    ParquetReader.Builder<Group> reader = ParquetReader.builder(readSupport, new Path(path));
    ParquetReader<Group> build = reader.build();
    Group line;
    // read() returns one record (Group) at a time, or null at end of file
    while ((line = build.read()) != null) {
        System.out.println(line.getInteger("intValue", 0));
        System.out.println(line.getLong("longValue", 0));
        System.out.println(line.getDouble("doubleValue", 0));
        System.out.println(line.getString("stringValue", 0));
        System.out.println(new String(line.getBinary("byteValue", 0).getBytes()));
        System.out.println(new String(line.getBinary("byteNone", 0).getBytes()));
    }
    build.close();
}
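  One caveat: with the OPTIONAL schema, a record where byteNone was never written holds no value at index 0, and reading it this way will throw. A defensive sketch using Group.getFieldRepetitionCount():

// an OPTIONAL field may hold 0 values; check before indexing into it
if (line.getFieldRepetitionCount("byteNone") > 0) {
    System.out.println(new String(line.getBinary("byteNone", 0).getBytes()));
} else {
    System.out.println("byteNone is null");
}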

  Supply the full path of the file generated earlier (the REQUIRED version, where byteNone holds Binary.EMPTY and therefore prints as a blank line); the parsed output is as follows:

2021-05-25 14:54:21.721  INFO 4860 --- [           main] o.a.p.h.InternalParquetRecordReader      : RecordReader initialized will read a total of 1 records.
2021-05-25 14:54:21.722  INFO 4860 --- [           main] o.a.p.h.InternalParquetRecordReader      : at row 0. reading next block
2021-05-25 14:54:21.739  INFO 4860 --- [           main] org.apache.hadoop.io.compress.CodecPool  : Got brand-new decompressor [.snappy]
2021-05-25 14:54:21.743  INFO 4860 --- [           main] o.a.p.h.InternalParquetRecordReader      : block read in memory in 21 ms. row count = 1
100
200
300.0
测试
不为空的值