1. Architecture Diagram
    [Figure: deployment architecture of Apache Spark 2.3 running on Kubernetes]
  2. Download and Build
    2.1 Download the source code and extract it
    Download link
tar -zxvf v2.3.2.tar.gz

2.2 Build

cd spark-2.3.2
build/mvn install -DskipTests
build/mvn compile -Pkubernetes -pl resource-managers/kubernetes/core -am -DskipTests
build/mvn install -Pkubernetes -pl resource-managers/kubernetes/core -am -DskipTests

[root@compile spark-2.3.2]# ls assembly/target/scala-2.11/jars/ -la|grep spark-kub*
-rw-r--r-- 1 root root   381120 Sep 26 09:56 spark-kubernetes_2.11-2.3.2.jar

dev/make-distribution.sh --tgz -Phadoop-2.7 -Pkubernetes

Build a tarball with R and Hive support:

./dev/make-distribution.sh --name inspur-spark --pip --r --tgz -Psparkr -Phadoop-2.7 -Phive -Phive-thriftserver -Pkubernetes

This fails with:

++ echo 'Cannot find '\''R_HOME'\''. Please specify '\''R_HOME'\'' or make sure R is properly installed.'
Cannot find 'R_HOME'. Please specify 'R_HOME' or make sure R is properly installed.

Since this walkthrough only tests Spark running on Kubernetes, the R build issue can be left unresolved for now.
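If the `-Psparkr` build is needed later, the error only means the build script cannot locate R. A minimal sketch of the fix; the path below is an assumption, use the output of `R RHOME` on your build host:

```shell
# Assumption: R is installed and lives at /usr/lib64/R (a typical location
# for a package-manager install); substitute the output of `R RHOME`.
export R_HOME=/usr/lib64/R

# Confirm the variable is set before re-running dev/make-distribution.sh
echo "$R_HOME"
```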

  3. Build the Docker Image
    ./bin/docker-image-tool.sh -r bigdata.registry.com:5000 -t 2.3.2 build
    ./bin/docker-image-tool.sh -r bigdata.registry.com:5000 -t 2.3.2 push

    While building the image, the package mirror dl-cdn.alpinelinux.org may be unreachable; switch to the Aliyun mirror instead.
    Edit ./resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile:

    RUN set -ex && \
    sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories && \
    apk upgrade --no-cache && \
    apk add --no-cache bash tini libc6-compat linux-pam && \
    mkdir -p /opt/spark && \
    mkdir -p /opt/spark/work-dir && \
    touch /opt/spark/RELEASE && \
    rm /bin/sh && \
    ln -sv /bin/bash /bin/sh && \
    echo "auth required pam_wheel.so use_uid" >> /etc/pam.d/su && \
    chgrp root /etc/passwd && chmod ug+rw /etc/passwd
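The only change relative to the stock Dockerfile is the `sed` line that rewrites the apk mirror. It can be sanity-checked outside Docker; the repository line below is a made-up stand-in, not the image's real /etc/apk/repositories:

```shell
# Hypothetical repository file standing in for /etc/apk/repositories
printf 'http://dl-cdn.alpinelinux.org/alpine/v3.8/main\n' > /tmp/repositories

# Same substitution as in the Dockerfile's RUN step
sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /tmp/repositories

cat /tmp/repositories   # http://mirrors.aliyun.com/alpine/v3.8/main
```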

    Because the local private Harbor registry has a project named insight,
    push the image with the following commands:

    docker tag bigdata.registry.com:5000/spark:2.3.2 bigdata.registry.com:5000/insight/spark:2.3.2
    docker push  bigdata.registry.com:5000/insight/spark:2.3.2
  4. Upload examples.jar to the httpd service
    [root@compile spark-2.3.2]# ll dist/examples/jars/spark-examples_2.11-2.3.2.jar 
    -rw-r--r-- 1 root root 1997551 Sep 26 09:56 dist/examples/jars/spark-examples_2.11-2.3.2.jar
    [root@compile spark-2.3.2]# cp dist/examples/jars/spark-examples_2.11-2.3.2.jar /opt/mnt/www/html/spark/
    [root@compile spark-2.3.2]# ll /opt/mnt/www/html/spark/
    -rw-r--r-- 1 root root  1997551 Sep 26 10:26 spark-examples_2.11-2.3.2.jar
  5. Prepare the Kubernetes environment, i.e. grant RBAC permissions
    kubectl create serviceaccount spark -nspark
    kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=spark:spark --namespace=spark

    In --serviceaccount=spark:spark, the first spark is the namespace and the second is the service account name.
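The subject granted by the binding can be spelled out as a plain string, which is also what `kubectl auth can-i` accepts to confirm the grant took effect (the `can-i` check is a suggested verification, not part of the original steps):

```shell
# A ClusterRoleBinding service-account subject has the form
# system:serviceaccount:<namespace>:<serviceaccount-name>
ns=spark
sa=spark
subject="system:serviceaccount:${ns}:${sa}"
echo "$subject"   # system:serviceaccount:spark:spark

# Against a live cluster, check that the 'edit' role lets the driver
# manage executor pods (commented out: needs a cluster; expected answer: yes):
#   kubectl auth can-i create pods -n "$ns" --as="$subject"
#   kubectl auth can-i delete pods -n "$ns" --as="$subject"
```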

  6. Test
    bin/spark-submit \
    --master k8s://http://10.221.129.20:8080 \
    --deploy-mode cluster \
    --name spark-pi \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.executor.instances=1 \
    --conf spark.kubernetes.container.image=bigdata.registry.com:5000/insight/spark:2.3.2 \
    --conf spark.kubernetes.namespace=spark \
    --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
    http://10.221.129.22/spark/spark-examples_2.11-2.3.2.jar

    Run log:

    2018-09-26 10:27:54 WARN Utils:66 - Kubernetes master URL uses HTTP instead of HTTPS.
    2018-09-26 10:28:25 WARN Config:347 - Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
    2018-09-26 10:28:27 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
    pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
    namespace: default
    labels: spark-app-selector -> spark-74d52904a3794e8986895a12322c5cd9, spark-role -> driver
    pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
    creation time: 2018-09-26T02:28:27Z
    service account name: default
    volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
    node name: N/A
    start time: N/A
    container images: N/A
    phase: Pending
    status: []
    2018-09-26 10:28:27 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
    pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
    namespace: default
    labels: spark-app-selector -> spark-74d52904a3794e8986895a12322c5cd9, spark-role -> driver
    pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
    creation time: 2018-09-26T02:28:27Z
    service account name: default
    volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
    node name: master2
    start time: N/A
    container images: N/A
    phase: Pending
    status: []
    2018-09-26 10:28:27 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
    pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
    namespace: default
    labels: spark-app-selector -> spark-74d52904a3794e8986895a12322c5cd9, spark-role -> driver
    pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
    creation time: 2018-09-26T02:28:27Z
    service account name: default
    volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
    node name: master2
    start time: 2018-09-26T02:28:27Z
    container images: bigdata.registry.com:5000/insight/spark:2.3.2
    phase: Pending
    status: [ContainerStatus(containerID=null, image=bigdata.registry.com:5000/insight/spark:2.3.2, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=null, reason=PodInitializing, additionalProperties={}), additionalProperties={}), additionalProperties={})]
    2018-09-26 10:28:28 INFO Client:54 - Waiting for application spark-pi to finish...
    2018-09-26 10:28:51 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
    pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
    namespace: default
    labels: spark-app-selector -> spark-74d52904a3794e8986895a12322c5cd9, spark-role -> driver
    pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
    creation time: 2018-09-26T02:28:27Z
    service account name: default
    volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
    node name: master2
    start time: 2018-09-26T02:28:27Z
    container images: bigdata.registry.com:5000/insight/spark:2.3.2
    phase: Pending
    status: [ContainerStatus(containerID=null, image=bigdata.registry.com:5000/insight/spark:2.3.2, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=null, reason=PodInitializing, additionalProperties={}), additionalProperties={}), additionalProperties={})]
    2018-09-26 10:28:56 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
    pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
    namespace: default
    labels: spark-app-selector -> spark-74d52904a3794e8986895a12322c5cd9, spark-role -> driver
    pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
    creation time: 2018-09-26T02:28:27Z
    service account name: default
    volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
    node name: master2
    start time: 2018-09-26T02:28:27Z
    container images: bigdata.registry.com:5000/insight/spark:2.3.2
    phase: Pending
    status: [ContainerStatus(containerID=null, image=bigdata.registry.com:5000/insight/spark:2.3.2, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=null, reason=PodInitializing, additionalProperties={}), additionalProperties={}), additionalProperties={})]
    2018-09-26 10:28:57 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
    pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
    namespace: default
    labels: spark-app-selector -> spark-74d52904a3794e8986895a12322c5cd9, spark-role -> driver
    pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
    creation time: 2018-09-26T02:28:27Z
    service account name: default
    volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
    node name: master2
    start time: 2018-09-26T02:28:27Z
    container images: bigdata.registry.com:5000/insight/spark:2.3.2
    phase: Running
    status: [ContainerStatus(containerID=docker://3abe8f7ac19d2f52ed3ba84e32e076268ae0dfde83ff0a75b2359924d3bac412, image=bigdata.registry.com:5000/insight/spark:2.3.2, imageID=docker-pullable://bigdata.registry.com:5000/insight/spark@sha256:0bfd1a27778f97a1ec620446b599d9f1fda882e8c3945a04ce8435356a40efe8, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=true, restartCount=0, state=ContainerState(running=ContainerStateRunning(startedAt=Time(time=2018-09-26T02:28:57Z, additionalProperties={}), additionalProperties={}), terminated=null, waiting=null, additionalProperties={}), additionalProperties={})]
    2018-09-26 10:29:05 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
    pod name: spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
    namespace: default
    labels: spark-app-selector -> spark-74d52904a3794e8986895a12322c5cd9, spark-role -> driver
    pod uid: d9bce33c-c133-11e8-b988-fa163e609d06
    creation time: 2018-09-26T02:28:27Z
    service account name: default
    volumes: spark-init-properties, download-jars-volume, download-files-volume, default-token-7mnhw
    node name: master2
    start time: 2018-09-26T02:28:27Z
    container images: bigdata.registry.com:5000/insight/spark:2.3.2
    phase: Failed
    status: [ContainerStatus(containerID=docker://3abe8f7ac19d2f52ed3ba84e32e076268ae0dfde83ff0a75b2359924d3bac412, image=bigdata.registry.com:5000/insight/spark:2.3.2, imageID=docker-pullable://bigdata.registry.com:5000/insight/spark@sha256:0bfd1a27778f97a1ec620446b599d9f1fda882e8c3945a04ce8435356a40efe8, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=ContainerStateTerminated(containerID=docker://3abe8f7ac19d2f52ed3ba84e32e076268ae0dfde83ff0a75b2359924d3bac412, exitCode=1, finishedAt=Time(time=2018-09-26T02:29:04Z, additionalProperties={}), message=null, reason=Error, signal=null, startedAt=Time(time=2018-09-26T02:28:57Z, additionalProperties={}), additionalProperties={}), waiting=null, additionalProperties={}), additionalProperties={})]
    2018-09-26 10:29:05 INFO LoggingPodStatusWatcherImpl:54 - Container final statuses:
    Container name: spark-kubernetes-driver
    Container image: bigdata.registry.com:5000/insight/spark:2.3.2
    Container state: Terminated
    Exit code: 1
    2018-09-26 10:29:05 INFO Client:54 - Application spark-pi finished.
    2018-09-26 10:29:05 INFO ShutdownHookManager:54 - Shutdown hook called
    2018-09-26 10:29:05 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-53c85221-619e-41c6-8b94-80b950852b7e
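The watcher only reports `Exit code: 1`; the actual stack trace lives in the driver pod's log. A sketch of the follow-up (note that per the status above this run landed in the `default` namespace, not `spark`):

```shell
# Driver pod name as reported by LoggingPodStatusWatcherImpl
driver=spark-pi-7b0ffe8a4023370a872acdd679f024b1-driver
echo "$driver"

# Against the cluster (commented out: these need a live cluster):
#   kubectl logs "$driver" -n default            # driver stdout/stderr with the real error
#   kubectl describe pod "$driver" -n default    # events, image pulls, init-container status
```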

Submitting from code:

import org.apache.spark.deploy.SparkSubmit

val args = Array(
  "--master", "k8s://http://10.221.129.20:8080",
  "--deploy-mode", "cluster",
  "--name", "spark-pi",
  "--class", "org.apache.spark.examples.SparkPi",
  "--conf", "spark.kubernetes.container.image=bigdata.registry.com:5000/insight/spark:2.3.2",
  "--conf", "spark.kubernetes.container.image.pullPolicy=Always",
  "--conf", "spark.kubernetes.namespace=spark",
  "--conf", "spark.executor.instances=1",
  "--conf", "spark.kubernetes.authenticate.driver.serviceAccountName=spark",
  "http://10.221.129.22/spark/spark-examples_2.11-2.3.2.jar",
  "1000"
)
for (arg <- args) {
  println(arg)
}
SparkSubmit.main(args)
println("----------------Submitted----------------")

Error message:

Error: Could not find or load main class org.apache.spark.examples.SparkPi

Compare the driver container launch commands from the two runs:

exec /sbin/tini -s -- /usr/lib/jvm/java-1.8-openjdk/bin/java -Dspark.app.name=spark-pi -Dspark.submit.deployMode=cluster -Dspark.driver.blockManager.port=7079 -Dspark.kubernetes.authenticate.driver.serviceAccountName=spark -Dspark.kubernetes.container.image=bigdata.registry.com:5000/insight/spark:2.3.2 -Dspark.driver.port=7078 -Dspark.jars=http://10.221.129.22/spark/spark-examples_2.11-2.3.2.jar,http://10.221.129.22/spark/spark-examples_2.11-2.3.2.jar -Dspark.kubernetes.namespace=spark -Dspark.master=k8s://http://10.221.129.20:8080 -Dspark.kubernetes.initContainer.configMapKey=spark-init.properties -Dspark.kubernetes.driver.pod.name=spark-pi-ab8c723b183c33cd9c4512efa77a9fc4-driver -Dspark.app.id=spark-3501543884294635931abc16b400ed33 -Dspark.driver.host=spark-pi-ab8c723b183c33cd9c4512efa77a9fc4-driver-svc.spark.svc -Dspark.executor.instances=1 -Dspark.kubernetes.executor.podNamePrefix=spark-pi-ab8c723b183c33cd9c4512efa77a9fc4 -Dspark.kubernetes.initContainer.configMapName=spark-pi-ab8c723b183c33cd9c4512efa77a9fc4-init-config -Dspark.kubernetes.container.image.pullPolicy=Always -cp ':/opt/spark/jars/*:/var/spark-data/spark-jars/spark-examples_2.11-2.3.2.jar;/var/spark-data/spark-jars/spark-examples_2.11-2.3.2.jar' -Xms1g -Xmx1g -Dspark.driver.bindAddress=158.158.104.125 org.apache.spark.examples.SparkPi 1000

exec /sbin/tini -s -- /usr/lib/jvm/java-1.8-openjdk/bin/java -Dspark.app.name=spark-pi -Dspark.submit.deployMode=cluster -Dspark.driver.blockManager.port=7079 -Dspark.kubernetes.authenticate.driver.serviceAccountName=spark -Dspark.kubernetes.container.image=bigdata.registry.com:5000/insight/spark:2.3.2 -Dspark.app.id=spark-e234fa6401bf4136bd1df1c5d4521b49 -Dspark.driver.port=7078 -Dspark.jars=http://10.221.129.22/spark/spark-examples_2.11-2.3.2.jar,http://10.221.129.22/spark/spark-examples_2.11-2.3.2.jar -Dspark.kubernetes.namespace=spark -Dspark.master=k8s://http://10.221.129.20:8080 -Dspark.driver.host=spark-pi-ccf29e2a29ac39479de5bc4b5cc9d179-driver-svc.spark.svc -Dspark.kubernetes.initContainer.configMapKey=spark-init.properties -Dspark.kubernetes.driver.pod.name=spark-pi-ccf29e2a29ac39479de5bc4b5cc9d179-driver -Dspark.executor.instances=1 -Dspark.kubernetes.initContainer.configMapName=spark-pi-ccf29e2a29ac39479de5bc4b5cc9d179-init-config -Dspark.kubernetes.executor.podNamePrefix=spark-pi-ccf29e2a29ac39479de5bc4b5cc9d179 -cp ':/opt/spark/jars/*:/var/spark-data/spark-jars/spark-examples_2.11-2.3.2.jar:/var/spark-data/spark-jars/spark-examples_2.11-2.3.2.jar' -Xms1g -Xmx1g -Dspark.driver.bindAddress=158.158.104.126 org.apache.spark.examples.SparkPi 1000

The difference is in the classpath set by -cp: in the first (failing) command the two example jars are joined with `;`, while in the working one they are joined with `:`. Linux uses `:` as the classpath separator; the `;` appears because the programmatic submission ran on Windows, where Java's File.pathSeparator is `;`. The driver JVM on Linux therefore sees the two jar paths as one nonexistent entry and cannot load SparkPi.
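The effect of the stray `;` can be reproduced directly: bash, like the JVM on Linux, splits a classpath on `:` only, so the two jar paths collapse into a single entry that names no existing file. A minimal demonstration with shortened, made-up paths:

```shell
# Classpath as produced by a Windows submitter: the jars are joined with ';'
bad_cp='/opt/spark/jars/*:/data/a.jar;/data/a.jar'

# Split on ':' the way the Linux JVM does
IFS=':' read -r -a entries <<< "$bad_cp"

echo "${#entries[@]}"   # 2 -- the two jar paths became a single entry
echo "${entries[1]}"    # /data/a.jar;/data/a.jar -- no such file exists
```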
Checking the relevant source code:

org.apache.spark.deploy.SparkSubmitAction

[code screenshot]

org.apache.spark.deploy.k8s.submit.KubernetesClientApplication

[code screenshots]
From the code above we can see that /var/spark-data/spark-jars/spark-examples_2.11-2.3.2.jar is added to the sparkJars variable twice.
The relevant code:

org.apache.spark.deploy.k8s.submit.steps.DependencyResolutionStep

[code screenshot]

org.apache.spark.deploy.k8s.submit.DriverConfigOrchestrator

[code screenshot]
which initializes the submissionSteps shown in the screenshot below:
[code screenshot]

7. References:
build-spark
running-on-kubernetes

Note:
Submitting a custom Spark job

bin/spark-submit \
    --master k8s://http://10.221.129.20:8080 \
    --deploy-mode cluster \
    --name rule-engine \
    --class com.inspur.iot.RuleEngine \
    --conf spark.executor.instances=1 \
    --conf spark.kubernetes.container.image=bigdata.registry.com:5000/insight/spark:2.3.2 \
    --conf spark.kubernetes.namespace=spark \
    --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
    http://10.221.129.22/spark/iot-stream-app-1.3-SNAPSHOT.jar \
    --base64=true \
    --rule=c2VsZWN0IHRpbWVTdGFtcCBBcyBrZXksIGNvbmNhdF93cygifCIsIHN0YXRlLnJlcG9ydGVkLnRlbXBlcmF0dXJlLCBjbGllbnRUb2tlbikgYXMgdmFsdWUgZnJvbSB0b3BpY3M= \
    --sample='{"timeStamp":1531381822,"clientToken":"clientId_lamp","state":{"reported":{"temperature":23}}}' \
    --source-type=kafka \
    --source='{"kafka.bootstrap.servers":"isa-kafka-svc.spark:9092","subscribe":"sensor"}' \
    --sink-type=console \
    --verbose
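Since --base64=true is set, the --rule argument is base64-encoded SQL; it can be decoded locally to verify what the job will actually run:

```shell
rule='c2VsZWN0IHRpbWVTdGFtcCBBcyBrZXksIGNvbmNhdF93cygifCIsIHN0YXRlLnJlcG9ydGVkLnRlbXBlcmF0dXJlLCBjbGllbnRUb2tlbikgYXMgdmFsdWUgZnJvbSB0b3BpY3M='

# Decode the rule to plain SQL
echo "$rule" | base64 -d
# select timeStamp As key, concat_ws("|", state.reported.temperature, clientToken) as value from topics
```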