Spring Boot Docker

Many people are using containers to wrap their Spring Boot applications, and building containers is not a simple thing to do.

This is a guide for developers of Spring Boot applications, and containers are not always a good abstraction for developers - they force you to learn about and think about very low level concerns - but you will on occasion be called on to create or use a container, so it pays to understand the building blocks.

Here we aim to show you some of the choices you can make if you are faced with the prospect of needing to create your own container.

We will assume that you know how to create and build a basic Spring Boot application. If you don’t, go to one of the Getting Started Guides, for example the one on building a REST Service.

Copy the code from there and practise with some of the ideas below.

There is also a Getting Started Guide on Docker, which would also be a good starting point, but it doesn’t cover the range of choices that we have here, or in as much detail.

A Basic Dockerfile

A Spring Boot application is easy to convert into an executable JAR file.

All the Getting Started Guides do this, and every app that you download from Spring Initializr will have a build step to create an executable JAR.

With Maven you run ./mvnw install, and with Gradle you run ./gradlew build.
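
For example, assuming a typical Maven project (the jar name shown here is illustrative):

$ ./mvnw install
$ ls target/*.jar     # e.g. target/myapp-0.0.1-SNAPSHOT.jar; Gradle puts the jar under build/libs/ instead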

A basic Dockerfile to run that JAR would then look like this, at the top level of your project:

# Use openjdk:8-jdk-alpine as the base image
FROM openjdk:8-jdk-alpine
# Declare /tmp as a volume shared with the host
VOLUME /tmp
# Declare the JAR_FILE variable; assign it at build time with --build-arg <varname>=<value>
ARG JAR_FILE
# Copy the file at the JAR_FILE path into the image as app.jar
COPY ${JAR_FILE} app.jar
# Run app.jar with java -jar when the container starts
ENTRYPOINT ["java","-jar","/app.jar"]

The JAR_FILE could be passed in as part of the docker command (it will be different for Maven and Gradle). E.g. for Maven:

# docker build creates an image from the Dockerfile in the current directory (the trailing .);
# --build-arg passes the jar path into the Dockerfile's JAR_FILE argument, and -t tags the image
# (note the flag is --build-arg, not --build-args; verified locally with Docker 19.03.2)
$ docker build --build-arg JAR_FILE=target/*.jar -t myorg/myapp .

And for Gradle:

$ docker build --build-arg JAR_FILE=build/libs/*.jar -t myorg/myapp .

Of course, once you have chosen a build system, you don’t need the ARG - you can just hard code the jar location. E.g. for Maven:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]

Then we can simply build an image with

$ docker build -t myorg/myapp .

and run it like this:

# docker run starts a container from the image, publishing host port 8080 to container port 8080
$ docker run -p 8080:8080 myorg/myapp

If you want to poke around inside the image you can open a shell in it like this (the base image does not have bash):

$ docker run -ti --entrypoint /bin/sh myorg/myapp

The docker configuration is very simple so far, and the generated image is not very efficient.

The docker image has a single filesystem layer with the fat jar in it, and every change we make to the application code changes that layer, which might be 10MB or more (even as much as 50MB for some apps).
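
You can check this for yourself, since docker history lists the layers of an image along with their sizes:

$ docker history myorg/myapp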

We can improve on that by splitting the JAR up into multiple layers.

Smaller Images (JRE)

Notice that the base image in the example above is openjdk:8-jdk-alpine.

The alpine images are smaller than the standard openjdk library images from Dockerhub.

There is no official alpine image for Java 11 yet (AdoptOpenJDK had one for a while but it no longer appears on their Dockerhub page).

You can also save about 20MB in the base image by using the “jre” label instead of “jdk”.

Not all apps work with a JRE (as opposed to a JDK), but most do, and indeed some organizations enforce a rule that every app has to because of the risk of misuse of some of the JDK features (like compilation).
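
As a sketch, the same basic Dockerfile on the jre image (only the base image line changes):

FROM openjdk:8-jre-alpine
VOLUME /tmp
COPY target/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]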

Another trick that could get you a smaller image is to use JLink, which is bundled with OpenJDK 11.

JLink allows you to build a custom JRE distribution from a subset of modules in the full JDK, so you don’t need a JRE or JDK in the base image.
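
As a rough sketch with JDK 11, a custom runtime could be assembled like this (the module list is illustrative - use jdeps to discover what your app actually needs):

$ jlink --add-modules java.base,java.logging,java.xml,java.sql,java.naming \
        --strip-debug --no-man-pages --no-header-files --compress=2 \
        --output target/custom-jre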

In principle this would get you a smaller total image size than using the openjdk official docker images.

In practice, you won’t (yet) be able to use the alpine base image with JDK 11, so your choice of base image will be limited and will probably result in a larger final image size.

Also, a custom JRE in your own base image cannot be shared amongst other applications, since they would need different customizations.

So you might have smaller images for all your applications, but they still take longer to start because they don’t benefit from caching the JRE layer.

That last point highlights a really important concern for image builders: the goal is not necessarily always going to be to build the smallest image possible.

Smaller images are generally a good idea because they take less time to upload and download, but only if none of the layers in them are already cached.

Image registries are quite sophisticated these days and you can easily lose the benefit of those features by trying to be clever with the image construction.

If you use common base layers, the total size of an image is less of a concern, and will probably become even less of one as the registries and platforms evolve.

Having said that, it is still important, and useful, to try and optimize the layers in our application image, but the goal should always be to put the fastest changing stuff in the highest layers, and to share as many of the large, lower layers as possible with other applications.

A Better Dockerfile

A Spring Boot fat jar naturally has “layers” because of the way that the jar itself is packaged.

If we unpack it first it will already be divided into external and internal dependencies.

To do this in one step in the docker build, we need to unpack the jar first.

For example (sticking with Maven, but the Gradle version is pretty similar):

$ mkdir target/dependency
$ (cd target/dependency; jar -xf ../*.jar)
$ docker build -t myorg/myapp .

with this Dockerfile:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]

There are now 3 layers, with all the application resources in the later 2 layers.

If the application dependencies don’t change, then the first layer (from BOOT-INF/lib) will not change, so the build will be faster, and so will the startup of the container at runtime as long as the base layers are already cached.

We used a hard-coded main application class hello.Application. This will probably be different for your application. You could parameterize it with another ARG if you wanted. You could also copy the Spring Boot fat JarLauncher into the image and use it to run the app - it would work and you wouldn’t need to specify the main class, but it would be a bit slower on startup.

Note that the main application class differs from app to app: check META-INF/MANIFEST.MF in the unpacked jar, where it appears as the Start-Class entry - that is the class to put here.
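
For reference, the manifest in a Spring Boot fat jar looks roughly like this (the Start-Class value here is just an example):

Manifest-Version: 1.0
Main-Class: org.springframework.boot.loader.JarLauncher
Start-Class: com.waichan.docker.DockerApplication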

For your own app, the ENTRYPOINT would then become, for example: ENTRYPOINT ["java","-cp","app:app/lib/*","com.waichan.docker.DockerApplication"]

Tweaks

If you want to start your app as quickly as possible (most people do) there are some tweaks you might consider. Here are some ideas (pulled together in a sketch after the list):

  • Use the spring-context-indexer (link to docs). It’s not going to add much for small apps, but every little helps.
  • Don’t use actuators if you can afford not to.
  • Use Spring Boot 2.1 and Spring 5.1.
  • Fix the location of the Spring Boot config file(s) with spring.config.location (command line argument or System property etc.).
  • Switch off JMX - you probably don’t need it in a container - with spring.jmx.enabled=false.
  • Run the JVM with -noverify. Also consider -XX:TieredStopAtLevel=1 (that will slow down the JIT later at the expense of the saved startup time).
  • Use the container memory hints for Java 8: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap. With Java 11 this is automatic by default.
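
Pulled together, here is a sketch of an ENTRYPOINT for the basic fat-jar image that applies several of these tweaks (all values are illustrative; point spring.config.location at your real config):

ENTRYPOINT ["java","-noverify","-XX:TieredStopAtLevel=1", \
            "-XX:+UnlockExperimentalVMOptions","-XX:+UseCGroupMemoryLimitForHeap", \
            "-Dspring.jmx.enabled=false", \
            "-Dspring.config.location=classpath:/application.properties", \
            "-jar","/app.jar"]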

Your app might not need a full CPU at runtime, but it will need multiple CPUs to start up as quickly as possible (at least 2, 4 are better).

If you don’t mind a slower startup you could throttle the CPUs down below 4.

If you are forced to start with less than 4 CPUs it might help to set -Dspring.backgroundpreinitializer.ignore=true since it prevents Spring Boot from creating a new thread that it probably won’t be able to use (this works with Spring Boot 2.1.0 and above).
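
As a sketch using the basic fat-jar image from earlier (the CPU cap and property are illustrative):

$ docker run --cpus=2 -p 8080:8080 --entrypoint java myorg/myapp \
    -Dspring.backgroundpreinitializer.ignore=true -jar /app.jar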

Multi-Stage Build

The Dockerfile above assumed that the fat JAR was already built on the command line.

You can also do that step in docker using a multi-stage build, copying the result from one image to another. Example, using Maven:

FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app

COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src

RUN ./mvnw install -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]

The first image is labelled “build” and it is used to run Maven and build the fat jar, then unpack it.

The unpacking could also be done by Maven or Gradle (this is the approach taken in the Getting Started Guide) - there really isn’t much difference, except that the build configuration would have to be edited and a plugin added.
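
For reference, a sketch of the Maven variant using the maven-dependency-plugin’s unpack goal (roughly the approach the Getting Started Guide takes; by default the goal unpacks into target/dependency, matching the DEPENDENCY argument above):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
        <execution>
            <id>unpack</id>
            <phase>package</phase>
            <goals>
                <goal>unpack</goal>
            </goals>
            <configuration>
                <artifactItems>
                    <artifactItem>
                        <groupId>${project.groupId}</groupId>
                        <artifactId>${project.artifactId}</artifactId>
                        <version>${project.version}</version>
                    </artifactItem>
                </artifactItems>
            </configuration>
        </execution>
    </executions>
</plugin>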

Notice that the source code has been split into 4 layers.

The later layers contain the build configuration and the source code for the app, and the earlier layers contain the build system itself (the Maven wrapper).

This is a small optimization, and it also means that we don’t have to copy the target directory to a docker image, even a temporary one used for the build.

Every build where the source code changes will be slow because the Maven cache has to be re-created in the first RUN section.

But you have a completely standalone build that anyone can run to get your application running as long as they have docker.

That can be quite useful in some environments, e.g. where you need to share your code with people who don’t know Java.

Experimental Features

Docker 18.06 comes with some “experimental” features that includes a way to cache build dependencies.

To switch them on you need a flag in the daemon (dockerd) and also an environment variable when you run the client, and then you can add a magic first line to your Dockerfile:

# syntax=docker/dockerfile:experimental

and the RUN directive then accepts a new flag --mount. Here’s a full example:

# syntax=docker/dockerfile:experimental
FROM openjdk:8-jdk-alpine as build
WORKDIR /workspace/app

COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src src

RUN --mount=type=cache,target=/root/.m2 ./mvnw install -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=/workspace/app/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","hello.Application"]

Then run it:

$ DOCKER_BUILDKIT=1 docker build -t myorg/myapp .
...
 => /bin/sh -c ./mvnw install -DskipTests              5.7s
 => exporting to image                                 0.0s
 => => exporting layers                                0.0s
 => => writing image sha256:3defa...
 => => naming to docker.io/myorg/myapp

With the experimental features you get a different output on the console, but you can see that a Maven build now only takes a few seconds instead of minutes, once the cache is warm.

While these features are in the experimental phase, the options for switching buildkit on and off depend on the version of docker that you are using.

Check the documentation for the version you have (the example above is correct for docker 18.0.6).

Build Plugins

If you don’t want to call docker directly in your build, there is quite a rich set of plugins for Maven and Gradle that can do that work for you. Here are just a few.

Spotify Maven Plugin (tested here with the second Dockerfile above)

The Spotify Maven Plugin is a popular choice. It requires the application developer to write a Dockerfile and then runs docker for you, just as if you were doing it on the command line.

There are some configuration options for the docker image tag and other stuff, but it keeps the docker knowledge in your application concentrated in a Dockerfile, which many people like.

For really basic usage it will work out of the box with no extra configuration:

$ mvn com.spotify:dockerfile-maven-plugin:build
...
[INFO] Building Docker context /home/dsyer/dev/demo/workspace/myapp
[INFO]
[INFO] Image will be built without a name
[INFO]
...
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.630 s
[INFO] Finished at: 2018-11-06T16:03:16+00:00
[INFO] Final Memory: 26M/595M
[INFO] ------------------------------------------------------------------------

That builds an anonymous docker image. We can tag it with docker on the command line now, or use Maven configuration to set it as the repository. Example (without changing the pom.xml):

$ mvn com.spotify:dockerfile-maven-plugin:build -Ddockerfile.repository=myorg/myapp

Or in the pom.xml:

<build>
    <plugins>
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>dockerfile-maven-plugin</artifactId>
            <version>1.4.8</version>
            <configuration>
                <repository>myorg/${project.artifactId}</repository>
            </configuration>
        </plugin>
    </plugins>
</build>

Continuous Integration

Automation is part of every application lifecycle these days (or should be).

The tools that people use to do the automation tend to be quite good at just invoking the build system from the source code.

So if that gets you a docker image, and the environment in the build agents is sufficiently aligned with developer’s own environment, that might be good enough.

Authenticating to the docker registry is likely to be the biggest challenge, but there are features in all the automation tools to help with that.

However, sometimes it is better to leave container creation completely to an automation layer, in which case the user’s code might not need to be polluted.

Container creation is tricky, and developers sometimes don’t really care about it.

If the user code is cleaner there is more chance that a different tool can “do the right thing”, applying security fixes, optimizing caches etc.

There are multiple options for automation and they will all come with some features related to containers these days. We are just going to look at a couple.

Buildpacks

Cloud Foundry has used containers internally for many years now, and part of the technology used to transform user code into containers is Build Packs, an idea originally borrowed from Heroku.

The current generation of buildpacks (v2) generates generic binary output that is assembled into a container by the platform.

The new generation of buildpacks (v3) is a collaboration between Heroku and other companies including Pivotal, and it builds container images directly and explicitly.

This is very interesting for developers and operators.

Developers don’t need to care so much about the details of how to build a container, but they can easily create one if they need to.

Buildpacks also have lots of features for caching build results and dependencies, so often a buildpack will run much quicker than a native docker build.

Operators can scan the containers to audit their contents and transform them to patch them for security updates.

And you can run the buildpacks locally (e.g. on a developer machine, or in a CI service), or in a platform like Cloud Foundry.

The output from a buildpack lifecycle is a container image, but you don’t need docker or a Dockerfile, so it’s CI and automation friendly.
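
For example, with the pack CLI a build is a one-liner (a hypothetical invocation - builder image names vary by platform and version):

$ pack build myorg/myapp --builder cloudfoundry/cnb:bionic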

The filesystem layers in the output image are controlled by the buildpack, and typically many optimizations will be made without the developer having to know or care about them.

There is also an Application Binary Interface between the lower level layers, like the base image containing the operating system, and the upper layers, containing middleware and language specific dependencies.

This makes it possible for a platform, like Cloud Foundry, to patch lower layers if there are security updates without affecting the integrity and functionality of the application.