
The functional style of programming is becoming more accepted in mainstream practical programming, for good reason. For one, functional programming offers tools to simplify complicated logic with more readable code.

Today, many mainstream programming languages, including Java, have incorporated these functional programming constructs. We don’t have to switch to a pure functional programming language in order to embrace functional style programming and benefit from it.

In this article, we’ll use a well-understood application to illustrate how functional programming can be applied to the data processing domain. We are going to look at how to ingest a tab-delimited file into a database, with some data validation and error reporting, using functional programming constructs in Java.

Full source code of examples in this article can be accessed here: https://gitlab.com/yongtze/functional-file-ingest

(File Ingestion — The Imperative Way)

To set the stage for the sort of code we are about to write, let’s look at the snippet below, written in Java in the conventional imperative style.

private FileIngestResult ingestFile(final File file) {
    long lineNumber = 0;
    long totalRowsIngested = 0;
    long totalErrorRows = 0;
    List<ValidationError> allValidationErrors = new ArrayList<>();
    try (
            final BufferedReader reader = new BufferedReader(new FileReader(file));
            final Connection connection = ContactDb.createConnection();
            final PreparedStatement insertStmt = ContactDb.prepareInsertStatement(connection);
    ) {
        String line = null;
        while ((line = reader.readLine()) != null) {
            lineNumber++;
            if (lineNumber == 1) {
                // Ignore the header line.
                continue;
            }

            final String[] tuple = line.split("\\t");
            final Contact contact = ContactFile.mapRow(tuple);
            final List<ValidationError> validationErrors = ContactFile.validateContact(lineNumber, contact);

            if (validationErrors.isEmpty()) {
                final int insertCount = ContactDb.insertContact(insertStmt, contact);
                totalRowsIngested += insertCount;
            } else {
                totalErrorRows++;
                allValidationErrors.addAll(validationErrors);
            }
        }

        return new FileIngestResult(
                file,
                FileIngestResult.Status.OK,
                lineNumber-1,
                totalRowsIngested,
                totalErrorRows,
                null,
                allValidationErrors
        );
    } catch (Exception e) {
        return new FileIngestResult(
                file,
                FileIngestResult.Status.ERROR,
                lineNumber-1,
                totalRowsIngested,
                totalErrorRows,
                e,
                allValidationErrors
        );
    }
}

What the code does is read all lines from a tab-delimited file, convert each line into a Contact object, validate the Contact object, and insert it into the contact table if validation passes. At the end, we return a summary of the file ingestion: total rows read from the file, total rows ingested into the database, total rows with validation errors, all the validation errors, and any exception caught in the process.

Now, since this is a relatively simple program, the code organization isn’t that bad. We can still pick up what the code is doing pretty easily. However, the point of looking at the imperative code is to have a baseline to compare with the functional version of the same code.

(Java Stream API)

Before we look at the first functional example, let’s take a quick recap of the Java Stream API in case you are new to it. For those who are familiar with the API, feel free to skip this section.

We can think of the Java Stream API as an API for processing a collection of data using a set of standard “operators”. The Java Stream API views the data as a stream that has a beginning and an end, with a chain of operators strung together to form a sequence of data transformation steps. When the chain is executed, each piece of data passes through the operators until the result is collected at the end of the chain.

For a more concrete example, let’s say we want to iterate over a List of Strings and print each one to standard output. Instead of coding this as a for loop, we use the forEach() operator.

List<String> names = Arrays.asList("John", "James", "Adam", "Mark");
names.stream()
    .forEach(name -> {
        System.out.println(name);
    });

The forEach() operator takes a closure, or anonymous function, which is called for each item in the stream. Notice we don’t need a for loop like in the imperative example above. With the Stream API, the iteration is controlled by the API; we just supply closures that handle each item in the stream.

Let’s expand our simple code above using another operator (map) to convert the names to uppercase before printing them.

List<String> names = Arrays.asList("John", "James", "Adam", "Mark");
names.stream()
    .map(name -> name.toUpperCase())
    .forEach(upperCaseName -> {
        System.out.println(upperCaseName);
    });

The map() operator takes a closure, which it calls for each item in the stream. Whatever value the closure returns is passed downstream to the next operator. map() is a commonly used operator for transforming data, in our case converting names to uppercase. Since forEach() comes after map(), it receives the uppercase names instead of the originals from the list.

What if, for whatever reason, we want to skip the first name in the stream? Easy, we just insert the skip() operator before map().

List<String> names = Arrays.asList("John", "James", "Adam", "Mark");
names.stream()
    .skip(1)
    .map(name -> name.toUpperCase())
    .forEach(upperCaseName -> {
        System.out.println(upperCaseName);
    });

The integer 1 we passed to skip() indicates how many items from the beginning of the stream we want to skip over. We can skip any number of items, not just 1.

What if we want to count the names instead of printing them? It turns out the Java Stream API provides a few operators for aggregating data in the stream. For example, the count() operator counts how many items reach the point in the stream where it is inserted.

final List<String> names = Arrays.asList("John", "James", "Adam", "Mark");
final long count = names.stream()
    .skip(1)
    .map(name -> name.toUpperCase())
    .count();
System.out.println("Count = " + count);

So, just like that, we can chain any number of operators required to transform the stream of data from the original value to a target value we want.
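To make the idea concrete, here is a slightly longer chain combining the operators above, plus filter(), an operator not covered so far that keeps only the items matching a predicate:

```java
import java.util.Arrays;
import java.util.List;

public class OperatorChain {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("John", "James", "Adam", "Mark");
        long count = names.stream()
            .skip(1)                               // drop "John"
            .map(String::toUpperCase)              // "JAMES", "ADAM", "MARK"
            .filter(name -> name.startsWith("J"))  // keep only "JAMES"
            .count();
        System.out.println("Count = " + count);    // prints "Count = 1"
    }
}
```

Each operator consumes the output of the previous one, so the order of the chain matters: moving filter() before map() would match against the original, non-uppercased names.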

(File Ingestion Using Java Stream API)

For our first functional file ingestion example, we’ll use the Java Stream API. Let’s build up the program one step at a time.

First, we need to read all the lines in the given file as a Stream of String. The JDK Files class provides a convenience function (Files.lines()) for doing this.

private Stream<Pair<Long, String>> indexedLine(final File file) throws IOException {
    final AtomicLong lineNumber = new AtomicLong(0L);
    return Files.lines(file.toPath())
        .map(line -> Pair.of(lineNumber.incrementAndGet(), line));
}

The first thing you’ll notice when we use the Stream API is that we relinquish control of the iteration to the API. In other words, you won’t see a while loop in our code any more. Instead, we tell the Stream API how to generate the stream, how to process each data item, and how to collect the results, and the Stream API controls how the iteration is executed.

An immediate consequence is that we now need a different way to associate the correct line number with each line read from the file, for error reporting purposes.

We solve this by pairing an auto-incremented Long with the line we just read from the file. We now have a Pair of Long and String, representing the line number and the line content.

This is a very common functional programming technique: instead of keeping temporary variables to track “state”, we capture all relevant state in a data structure (in this case, Pair) and emit it as output so that subsequent steps have access to it.

#1: In functional programming, we emit all relevant states as output, instead of using shared variables.
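The same technique can be sketched without the article's Pair class, using a plain Java record as the carrier of state. The names IndexedLines and Indexed here are illustrative, not from the article's code:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Stream;

public class IndexedLines {
    // A simple stand-in for the article's Pair<Long, String>.
    record Indexed(long lineNumber, String line) {}

    // Each element carries its own line number as part of the output,
    // so no caller has to track a shared counter variable. Note the
    // numbering is only correct for sequential (non-parallel) streams.
    static Stream<Indexed> indexed(Stream<String> lines) {
        final AtomicLong counter = new AtomicLong(0L);
        return lines.map(line -> new Indexed(counter.incrementAndGet(), line));
    }

    public static void main(String[] args) {
        indexed(Stream.of("header", "alice", "bob"))
            .forEach(i -> System.out.println(i.lineNumber() + ":" + i.line()));
    }
}
```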

Next, we need to convert the line read from the file into a Contact object. This is a simple mapping function; however, remember that we need to capture all relevant state as output. We will create a new class, LineIngestResult, with all the properties required to aggregate ingestion results at the end.

public class LineIngestResult {
    private final long lineNumber;
    private final Contact contact;
    private final List<ValidationError> validationErrors;
    private final Boolean insertSucceed;
    private final Throwable exception;

    // Constructors and getters omitted for brevity.
    ...
}

It’s worth noting that since functional programming encourages immutable data structures, the Contact class has all properties marked final, with values initialized only through constructors. Getters allow read access to these properties.

Why immutable data structures, you may ask? In short, immutable data structures eliminate the possibility that properties change after initialization, making code easier to reason about. But how do we transform data then?

Well, we transform data by creating a new copy of the input object, with the relevant properties changed in the new instance. By adopting this strict convention, we don’t have to worry about unintended modification of object properties after creation in downstream code.

#2: Prefer usage of immutable data structures over mutable ones. Transform the input object to output object by copying and mutating at initialization of output object.
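As a minimal sketch of this copy-on-transform convention (the field names below are illustrative, not the article's actual Contact class), an immutable type exposes "with" methods that return a changed copy instead of mutating in place:

```java
public class ImmutableContact {
    // Hypothetical immutable value type; records make every
    // component final and constructor-initialized by default.
    record Contact(String name, String email) {
        // "Mutation" produces a new copy with the changed property;
        // the receiving instance is left untouched.
        Contact withEmail(String newEmail) {
            return new Contact(name, newEmail);
        }
    }

    public static void main(String[] args) {
        Contact original = new Contact("Ada", "ada@example.com");
        Contact updated = original.withEmail("ada@newhost.example");
        // prints "ada@example.com / ada@newhost.example"
        System.out.println(original.email() + " / " + updated.email());
    }
}
```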

The following function “maps” a line of the file into a Contact object, and finally to a LineIngestResult object.

private LineIngestResult mapContact(final Pair<Long, String> indexedLine) {
    final String[] tuple = indexedLine.getSecond().split("\\t");
    final Contact contact = ContactFile.mapRow(tuple);
    return new LineIngestResult(indexedLine.getFirst(), contact);
}

Next step, we need to validate the Contact object. The following function shows how we take the LineIngestResult object from the previous step, and transform it into a new LineIngestResult, with the validationErrors property initialized with the validation result.

private LineIngestResult validateContact(final LineIngestResult lineResult) {
    final long lineNumber = lineResult.getLineNumber();
    final Contact contact = lineResult.getContact();
    return new LineIngestResult(lineNumber, contact, ContactFile.validateContact(lineNumber, contact));
}

Next, we need to insert the Contact object into our database if it passes validation. Again, we follow the same convention of taking in a LineIngestResult object and producing a new one with the result of the insert execution. If validation did not pass, we simply return the previous LineIngestResult for aggregation downstream.

private LineIngestResult insertContact(final PreparedStatement insertStmt, final LineIngestResult lineResult) {
    if (lineResult.hasValidationError()) {
        return lineResult;
    } else {
        try {
            ContactDb.insertContact(insertStmt, lineResult.getContact());
            return new LineIngestResult(
                lineResult.getLineNumber(),
                lineResult.getContact(),
                lineResult.getValidationErrors(),
                true,
                null);
        } catch (SQLException e) {
            throw new RuntimeException(e.getMessage(), e);
        }
    }
}

Now that we’ve got the individual steps coded, it’s time for the exciting part: gluing them together using the Java Stream API.

private FileIngestResult ingestFile(final File file) {
    try (
        final Connection connection = ContactDb.createConnection();
        final PreparedStatement insertStmt = ContactDb.prepareInsertStatement(connection);
    ) {
        return indexedLine(file)                          // (1)
            .skip(1)                                      // (2)
            .map(indexedLine -> mapContact(indexedLine))  // (3)
            .map(contact -> validateContact(contact))     // (4)
            .map(lineResult ->
                insertContact(insertStmt, lineResult))    // (5)
            .reduce(new FileIngestResult(file),
                (FileIngestResult fileResult, LineIngestResult lineResult) ->
                    fileResult.accumulate(
                        lineResult.getValidationErrors(),
                        lineResult.isInsertSucceed(),
                        lineResult.getException()),
                (a, b) -> a);                             // (6)
    } catch (Exception e) {
        return new FileIngestResult(
            file, FileIngestResult.Status.ERROR, 0, 0, 0, e, Collections.emptyList());
    }
}

In a very quick overview of the code above, here’s what each of the steps is doing:

  1. Read each line from the specified file, paired with a line number.
  2. Skip the first (header) line.
  3. Map the line to a LineIngestResult containing a Contact object.
  4. Validate the Contact object.
  5. Insert the Contact object if validation passes.
  6. Aggregate all LineIngestResult objects in the stream into a single FileIngestResult.

When looking at code written in this style, it’s always helpful to be mindful of the input and output of each step or operator in the chain.

Input/output data types of each step in the operator chain

Note that steps 3 to 5 each take a LineIngestResult object as input and generate a new LineIngestResult. Recall from the functions above that each of these steps fills in more properties of LineIngestResult as data progresses through the chain.

Finally, the reduce() operator is something we haven’t looked at prior to this point. The reduce() operator is another collector, or terminal operator, like count(). Instead of simply counting items, reduce() allows us to aggregate all LineIngestResult objects in the stream into a single FileIngestResult object representing the summary of the entire ingestion process.
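Before dissecting the article's reduce() call, it may help to see reduce() in isolation, folding the name list from earlier into a single accumulated value:

```java
import java.util.Arrays;
import java.util.List;

public class ReduceDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("John", "James", "Adam", "Mark");
        // Fold the stream into one value: start at 0, then for each name
        // add its length to the running total (4 + 5 + 4 + 4 = 17).
        int totalLength = names.stream()
            .reduce(0,                                   // identity (initial value)
                    (acc, name) -> acc + name.length(),  // accumulator
                    Integer::sum);                       // combiner (parallel streams only)
        System.out.println("Total length = " + totalLength);  // prints "Total length = 17"
    }
}
```

The same three-argument shape (identity, accumulator, combiner) appears in the file ingestion code, just with FileIngestResult as the accumulated value instead of an int.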

(Reduce() Operator)

Let’s dive a bit deeper into the reduce() step and see how it works. If you recall from our imperative code, we accumulate lineNumber, totalRowsIngested, totalErrorRows, and allValidationErrors as we ingest each line, constructing a single FileIngestResult at the end. We use temporary variables to hold on to the partial state throughout the iteration.

Following convention #1 above of using functions that transform input to output instead of temporary variables, we need a different way of aggregating results. As it turns out, aggregation can be thought of as a series of function calls, each taking the current accumulated result and the current data item and producing the new accumulated result. This function is called the accumulator in the Java Stream API’s reduce() operator.

The accumulator function in our example above is the second parameter of the reduce() operator.

(FileIngestResult fileResult, LineIngestResult lineResult) ->
    fileResult.accumulate(lineResult.getValidationErrors(), lineResult.isInsertSucceed(), lineResult.getException())

Most of the logic has been moved inside FileIngestResult.accumulate(). Let’s take a look.

public FileIngestResult accumulate(
        final List<ValidationError> validationErrors,
        final boolean isIngested,
        final Throwable exception) {

    final boolean isError = (validationErrors != null && !validationErrors.isEmpty()) || exception != null;

    final Status status = this.status == Status.ERROR || isError
        ? Status.ERROR
        : Status.OK;

    final List<ValidationError> newValidationErrors = new LinkedList<>(this.validationErrors);
    if (validationErrors != null && !validationErrors.isEmpty()) {
        newValidationErrors.addAll(validationErrors);
    }

    return new FileIngestResult(
        this.file,
        status,
        this.totalRowsRead + 1,
        isIngested ? this.totalRowsIngested + 1 : this.totalRowsIngested,
        isError ? this.totalErrorRows + 1 : this.totalErrorRows,
        this.exception == null ? exception : this.exception,
        newValidationErrors
    );
}

Remember, the accumulator function takes the current accumulated result (FileIngestResult) and the current data item (LineIngestResult), and produces a new FileIngestResult representing the new accumulated result. FileIngestResult is simply a collection of all the data points we want to aggregate, and each one is accumulated in this function.

  1. status — set to ERROR if either the current FileIngestResult is ERROR or the current LineIngestResult has an error; otherwise set to OK.
  2. totalRowsRead — current totalRowsRead + 1.
  3. totalRowsIngested — totalRowsIngested + 1 if the row was inserted; otherwise unchanged.
  4. totalErrorRows — totalErrorRows + 1 if the row has an error; otherwise unchanged.
  5. exception — the first non-null value of (current exception, exception from LineIngestResult).
  6. validationErrors — current validationErrors plus the validationErrors from LineIngestResult.

Now that we understand how the accumulator function works, let’s look at the reduce() operator again. More specifically, here’s what each parameter of the reduce() operator means.

reduce(new FileIngestResult(file),                          // (1)
    (FileIngestResult fileResult, LineIngestResult lineResult) ->
        fileResult.accumulate(lineResult.getValidationErrors(), lineResult.isInsertSucceed(), lineResult.getException()),   // (2)
    (a, b) -> a                                             // (3)
);
  1. Initial value of the aggregation.
  2. The accumulator function that’s called for each data item in the stream.
  3. The third parameter is not relevant in our example because it’s only required for parallel processing, and we are doing sequential processing. Hence, we are giving a dummy function that simply returns the first parameter.

(Composing Functions)

In essence, what Stream API allows us to do is to compose a sequence of smaller data transformation functions into a bigger construct that can take external input (a file in this example), perform any required transformations (convert to domain object and validate), perform the desired side effects (data ingested into the database) and produce the output expected (FileIngestResult).

#3: Compose a sequence of smaller data transformation functions together to form a more complicated construct to transform data.

(Summary)

We looked at a few functional programming tools in our example to ingest data from a file into the database, using the Java Stream API. The Java Stream API provides the Stream abstraction, which is very useful for processing data. It allows us to combine many transformation steps into a chain of operators. Each step has a well-defined purpose: transforming one data item into its next state.

I believe this makes the code more organized and easier to read overall, compared to the imperative version. However, the usage of Java Stream API in our example here is not perfect, as it does not capture all the behaviors of the imperative version of the program.

(Java Stream API Shortcomings)

Most notably, exception handling seems to be left out of the Stream API. We have to resort to our own ways of handling exceptions thrown in each step of the operator chain. The way we’ve chosen to solve this is simply converting checked exceptions to unchecked ones so that they can be caught outside of the operator chain. A side effect of handling the exception outside of the operator chain is that we can’t capture the state at the point where the exception occurs for reporting purposes.
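One common shape for this workaround (a sketch, not code from the article; the names Unchecked and ThrowingFunction are hypothetical) is a small adapter that rethrows checked exceptions as unchecked, so a throwing function can still be used inside map():

```java
import java.util.function.Function;
import java.util.stream.Stream;

public class Unchecked {
    // A Function variant whose body may throw a checked exception.
    @FunctionalInterface
    interface ThrowingFunction<T, R> {
        R apply(T t) throws Exception;
    }

    // Adapt a throwing function to java.util.function.Function by
    // wrapping any checked exception in a RuntimeException.
    static <T, R> Function<T, R> wrap(ThrowingFunction<T, R> f) {
        return t -> {
            try {
                return f.apply(t);
            } catch (Exception e) {
                throw new RuntimeException(e.getMessage(), e);
            }
        };
    }

    public static void main(String[] args) {
        // The RuntimeException, if thrown, surfaces outside the chain,
        // where a surrounding try/catch can handle it.
        Stream.of("1", "2", "3")
            .map(wrap(s -> Integer.parseInt(s)))
            .forEach(System.out::println);
    }
}
```

This is exactly the convention insertContact() above follows by hand when it wraps SQLException in a RuntimeException.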

Secondly, any resource initialization required, e.g. database connections and prepared statements, cannot be captured by the operator chain. You’ll notice that getting a database connection and preparing the insert statement is still done outside of the Stream API in the example. While this works, it would be ideal to let the stream initialize resources at the beginning and close them afterwards.
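A partial mitigation (a sketch, not from the article's code) is that Stream implements AutoCloseable and supports onClose() hooks, so cleanup can at least be tied to the stream's lifecycle even though acquisition still happens outside the chain:

```java
import java.util.stream.Stream;

public class StreamResources {
    public static void main(String[] args) {
        // onClose() registers a hook that runs when the stream is closed,
        // e.g. at the end of this try-with-resources block. Files.lines()
        // uses the same mechanism to close its underlying file handle.
        try (Stream<String> lines = Stream.of("a", "b", "c")
                .onClose(() -> System.out.println("resource released"))) {
            lines.forEach(System.out::println);
        }
    }
}
```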

There is a solution to these two problems using another streaming library. However, that’s beyond the scope of this article, and perhaps we can look into that in a future article.

Full source code of examples in this article can be accessed here: https://gitlab.com/yongtze/functional-file-ingest

Translated from: https://medium.com/swlh/a-journey-from-imperative-to-functional-in-java-4e40c32a2251
