1. Architecture Diagram
2. CRD Overview
Fluent Operator defines two API groups, one each for Fluent Bit and Fluentd: fluentbit.fluent.io and fluentd.fluent.io.
- The fluentbit.fluent.io group contains the following 6 CRDs:
FluentBit defines the properties of the Fluent Bit deployment, such as image version, tolerations, and affinity.
ClusterFluentBitConfig defines the Fluent Bit configuration file.
ClusterInput defines Fluent Bit input plugins, which determine what logs are collected.
ClusterFilter defines Fluent Bit filter plugins, which filter and process the records Fluent Bit collects.
ClusterParser defines Fluent Bit parser plugins, which parse log records into other formats.
ClusterOutput defines Fluent Bit output plugins, which forward the processed logs to their destinations.
- The fluentd.fluent.io group contains the following 7 CRDs:
Fluentd defines the properties of the Fluentd deployment, such as image version, tolerations, and affinity.
ClusterFluentdConfig defines a cluster-scoped Fluentd configuration.
FluentdConfig defines a namespace-scoped Fluentd configuration.
ClusterFilter defines cluster-scoped Fluentd filter plugins, which filter and process the records Fluentd receives; when Fluent Bit is also installed, they can process the logs further.
Filter defines namespace-scoped Fluentd filter plugins, which filter and process the records Fluentd receives; when Fluent Bit is also installed, they can process the logs further.
ClusterOutput defines cluster-scoped Fluentd output plugins, which forward the processed logs to their destinations.
Output defines namespace-scoped Fluentd output plugins, which forward the processed logs to their destinations.
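To make the cluster/namespace split concrete, here is a sketch of how a namespace-scoped FluentdConfig selects a namespace-scoped Output by label. The names and labels are illustrative, and the selector/plugin field names should be checked against the CRD schema of your operator version:

```shell
# Illustrative namespace-scoped pairing (hypothetical names and labels).
cat > /tmp/ns-fluentd-config.yaml << 'EOF'
apiVersion: fluentd.fluent.io/v1alpha1
kind: FluentdConfig
metadata:
  name: ns-fluentd-config
  namespace: default
  labels:
    config.fluentd.fluent.io/enabled: "true"
spec:
  outputSelector:
    matchLabels:
      output.fluentd.fluent.io/enabled: "true"
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: Output
metadata:
  name: ns-output-stdout
  namespace: default
  labels:
    output.fluentd.fluent.io/enabled: "true"
spec:
  outputs:
  - stdout: {}
EOF
```

Only workloads in the FluentdConfig's own namespace are affected, whereas a ClusterFluentdConfig (used later in this article) can watch several namespaces at once.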
3. The Three Working Modes of Fluent Operator
Fluent Bit only mode: if you only need to collect logs and send them to the final destination after light processing, Fluent Bit alone is enough.
Fluentd only mode: if you need to receive logs over the network via HTTP, Syslog, and so on, then process them and send them to the final destination, Fluentd alone is enough.
Fluent Bit + Fluentd mode: if you also need advanced processing of the collected logs, or need to fan them out to more sinks, combine Fluent Bit with Fluentd.
4. Fluent Operator in Practice
Environment
- kubernetes 1.23.4 (containerd)
- helm 3.8
- elasticsearch 6.8.5
- kibana 6.8.5
- log-generator(nginx)
The following scripts create log-generator (nginx), Elasticsearch, Kibana, and fluent-operator.
mkdir -p /data/fluent-operator
cat > /data/fluent-operator/nginx.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: default
labels:
app: nginx
annotations:
    fluentbit.io/parser: nginx  # parse this pod's logs with the nginx parser
    # fluentbit.io/exclude: "true"  # uncomment to exclude this pod's logs from collection
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- env:
- name: TZ
value: Asia/Shanghai
name: nginx
image: banzaicloud/log-generator:0.3.2
ports:
- containerPort: 80
EOF
kubectl apply -f /data/fluent-operator/nginx.yaml
- Deploy Elasticsearch and Kibana
mkdir -p /var/lib/container/elasticsearch/data \
&& chmod 777 /var/lib/container/elasticsearch/data
kubectl create ns elasticsearch
cat > /data/fluent-operator/elasticsearch.yaml << 'EOF'
apiVersion: v1
kind: Secret
metadata:
name: elasticsearch-password
namespace: elasticsearch
data:
ES_PASSWORD: RWxhc3RpY3NlYXJjaDJPMjE=
ES_USER: ZWxhc3RpYw==
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch
namespace: elasticsearch
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
volumes:
- name: elasticsearch-data
hostPath:
path: /var/lib/container/elasticsearch/data
- name: localtime
hostPath:
path: /usr/share/zoneinfo/Asia/Shanghai
containers:
- env:
- name: TZ
value: Asia/Shanghai
- name: xpack.security.enabled
value: "true"
- name: discovery.type
value: single-node
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: elasticsearch-password
key: ES_PASSWORD
name: elasticsearch
image: elasticsearch:7.13.1
imagePullPolicy: Always
ports:
- containerPort: 9200
- containerPort: 9300
resources:
requests:
memory: 1000Mi
cpu: 200m
limits:
memory: 1000Mi
cpu: 500m
volumeMounts:
- name: elasticsearch-data
mountPath: /usr/share/elasticsearch/data
- name: localtime
mountPath: /etc/localtime
---
apiVersion: v1
kind: Service
metadata:
labels:
app: elasticsearch
name: elasticsearch
namespace: elasticsearch
spec:
ports:
- port: 9200
protocol: TCP
targetPort: 9200
selector:
app: elasticsearch
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: elasticsearch
labels:
app: kibana
spec:
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
# nodeSelector:
# es: log
containers:
- name: kibana
image: kibana:7.13.1
resources:
limits:
cpu: 1000m
requests:
cpu: 1000m
env:
- name: TZ
value: Asia/Shanghai
- name: ELASTICSEARCH_HOSTS
value: http://elasticsearch:9200
- name: ELASTICSEARCH_USERNAME
valueFrom:
secretKeyRef:
name: elasticsearch-password
key: ES_USER
- name: ELASTICSEARCH_PASSWORD
valueFrom:
secretKeyRef:
name: elasticsearch-password
key: ES_PASSWORD
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: elasticsearch
labels:
app: kibana
spec:
ports:
- port: 5601
nodePort: 5601
type: NodePort
selector:
app: kibana
EOF
kubectl apply -f /data/fluent-operator/elasticsearch.yaml
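The Secret in the manifest above stores credentials base64-encoded, not encrypted; you can verify the values locally:

```shell
# Encode the username stored in ES_USER.
printf '%s' 'elastic' | base64                       # ZWxhc3RpYw==

# Decode the ES_PASSWORD value back to plaintext.
printf '%s' 'RWxhc3RpY3NlYXJjaDJPMjE=' | base64 -d   # Elasticsearch2O21
```

`echo -n` also works here; `printf` simply avoids accidentally encoding a trailing newline.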
- Deploy fluent-operator with Helm
app=fluent-operator
version=v1.0.1
mkdir -p /data/$app
cd /data/$app
wget https://github.com/fluent/fluent-operator/releases/download/$version/fluent-operator.tgz
tar zxvf $app.tgz --strip-components 1 $app/values.yaml
cat > /data/$app/start.sh << EOF
helm upgrade --install --wait $app $app.tgz \
--create-namespace \
-f values.yaml \
-n $app
EOF
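Note the two heredoc styles used in this article: the start.sh heredoc above uses an unquoted EOF on purpose, so $app is expanded when the file is written, while the manifest heredocs use 'EOF' to keep their contents literal. A minimal demonstration:

```shell
app=fluent-operator

# Unquoted delimiter: variables are expanded at write time.
cat > /tmp/expanded.txt << EOF
chart=$app.tgz
EOF

# Quoted delimiter: the text is written literally.
cat > /tmp/literal.txt << 'EOF'
chart=$app.tgz
EOF

cat /tmp/expanded.txt   # chart=fluent-operator.tgz
cat /tmp/literal.txt    # chart=$app.tgz
```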
Modify the following settings in values.yaml:
containerRuntime: containerd
Kubernetes: true
fluentbit:
  enable: false  # do not auto-install Fluent Bit; we create it via CRs below
fluentd:
  enable: false  # do not auto-install Fluentd
bash /data/fluent-operator/start.sh
4.1 Fluent Bit Only Mode (Lab 1)
mkdir /data/fluent-operator/fluent-bit_only -p
cat > /data/fluent-operator/fluent-bit_only/fluentbit_systemd.yaml << 'EOF'
apiVersion: fluentbit.fluent.io/v1alpha2
kind: FluentBit
metadata:
name: fluent-bit-only
namespace: fluent-operator
labels:
app.kubernetes.io/name: fluent-bit
spec:
image: kubesphere/fluent-bit:v1.9.3
positionDB:
hostPath:
path: /var/lib/fluent-bit/
resources:
requests:
cpu: 10m
memory: 25Mi
limits:
cpu: 500m
memory: 200Mi
fluentBitConfigName: fluent-bit-only-config
tolerations:
- operator: Exists
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFluentBitConfig
metadata:
name: fluent-bit-only-config
labels:
app.kubernetes.io/name: fluent-bit
spec:
service:
parsersFile: parsers.conf
inputSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit-only"
filterSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit-only"
outputSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit-only"
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
name: systemd
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit-only"
spec:
systemd:
tag: service.kubelet
path: /var/log/journal
db: /fluent-bit/tail/systemd.db
# systemdFilter:
# - _SYSTEMD_UNIT=kubelet.service
# - _SYSTEMD_UNIT=containerd.service
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFilter
metadata:
name: systemd
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit-only"
spec:
match: service.*
filters:
- lua:
script:
key: systemd.lua
name: fluent-bit-lua
call: add_time
timeAsTable: true
---
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: operator
app.kubernetes.io/name: fluent-bit-lua
name: fluent-bit-lua
namespace: fluent-operator
data:
systemd.lua: |
function add_time(tag, timestamp, record)
new_record = {}
timeStr = os.date("!*t", timestamp["sec"])
t = string.format("%4d-%02d-%02dT%02d:%02d:%02d.%sZ",
timeStr["year"], timeStr["month"], timeStr["day"],
timeStr["hour"], timeStr["min"], timeStr["sec"],
timestamp["nsec"])
kubernetes = {}
kubernetes["pod_name"] = record["_HOSTNAME"]
kubernetes["container_name"] = record["SYSLOG_IDENTIFIER"]
kubernetes["namespace_name"] = "kube-system"
new_record["time"] = t
new_record["log"] = record["MESSAGE"]
new_record["kubernetes"] = kubernetes
return 1, timestamp, new_record
end
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
name: es-systemd
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit-only"
spec:
  matchRegex: (?:kube|service)\.(.*)
es:
host: elasticsearch.elasticsearch.svc
port: 9200
generateID: true
logstashPrefix: fluent-log-fb-system
logstashFormat: true
timeKey: "@timestamp"
EOF
kubectl apply -f /data/fluent-operator/fluent-bit_only/fluentbit_systemd.yaml
Use Fluent Bit to collect application logs from containerd and ship them to Elasticsearch:
mkdir /data/fluent-operator/fluent-bit_only/ -p
cat > /data/fluent-operator/fluent-bit_only/fluentbit_containerd.yaml << 'EOF'
apiVersion: fluentbit.fluent.io/v1alpha2
kind: FluentBit
metadata:
name: fluent-bit
namespace: fluent-operator
labels:
app.kubernetes.io/name: fluent-bit
spec:
image: kubesphere/fluent-bit:v1.9.3
positionDB:
hostPath:
path: /var/lib/fluent-bit/
resources:
requests:
cpu: 10m
memory: 25Mi
limits:
cpu: 500m
memory: 200Mi
fluentBitConfigName: fluent-bit-config
tolerations:
- operator: Exists
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFluentBitConfig
metadata:
name: fluent-bit-config
labels:
app.kubernetes.io/name: fluent-bit
spec:
service:
parsersFile: parsers.conf
inputSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "k8s"
filterSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "k8s"
outputSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "k8s"
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
name: tail
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "k8s"
spec:
tail:
tag: kube.*
path: /var/log/containers/*.log
parser: cri
refreshIntervalSeconds: 10
memBufLimit: 5MB
skipLongLines: true
db: /fluent-bit/tail/pos.db
dockerMode: false
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFilter
metadata:
name: kubernetes
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "k8s"
spec:
match: kube.*
filters:
- kubernetes:
kubeURL: https://kubernetes.default.svc:443
kubeCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubeTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsVerify: false
bufferSize: 5MB
      k8sLoggingParser: true  # let pods suggest a predefined parser via Kubernetes annotations (default Off)
      k8sLoggingExclude: true  # let pods opt out of log collection via Kubernetes annotations (default Off)
      # labels: false  # include Kubernetes resource labels in the extra metadata
      # annotations: false  # include Kubernetes resource annotations in the extra metadata
      # mergeLog: true  # when the log line is itself JSON, append its fields to the record
      # keepLog: true  # keep the original log field after merging
      # mergeLogTrim: true  # when mergeLog is enabled, trim trailing \n or \r from field values
  # lift the kubernetes map and prefix its keys with kubernetes_
- nest:
operation: lift
nestedUnder: kubernetes
addPrefix: kubernetes_
  # keep only records whose kubernetes_container_name is traefik or fluent-bit
- grep:
regex: kubernetes_container_name (traefik|fluent-bit)
  # remove the following keys
- modify:
rules:
- remove: stream
- remove: kubernetes_pod_id
- remove: kubernetes_host
- remove: kubernetes_container_hash
  # re-nest keys prefixed with kubernetes_ back under kubernetes, stripping the prefix
- nest:
operation: nest
wildcard:
- kubernetes_*
nestUnder: kubernetes
removePrefix: kubernetes_
  # further process nginx/JSON logs by parsing the message field again
- parser:
keyName: message
parser: nginx,json
      preserveKey: false  # drop the original message key after parsing
      reserveData: true  # keep all other original fields from before parsing
- modify:
rules:
- remove: logtag
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
name: es
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "k8s"
spec:
matchRegex: (?:kube|service)\.(.*)
# stdout: {}
es:
host: elasticsearch.elasticsearch.svc
port: 9200
httpUser:
valueFrom:
secretKeyRef:
name: elasticsearch-password
key: ES_USER
httpPassword:
valueFrom:
secretKeyRef:
name: elasticsearch-password
key: ES_PASSWORD
generateID: true
logstashPrefix: fluent-log-fb-containerd
logstashFormat: true
timeKey: "@timestamp"
---
apiVersion: v1
kind: Secret
metadata:
name: elasticsearch-password
namespace: fluent-operator
data:
ES_PASSWORD: RWxhc3RpY3NlYXJjaDJPMjE=
ES_USER: ZWxhc3RpYw==
type: Opaque
EOF
kubectl apply -f /data/fluent-operator/fluent-bit_only/fluentbit_containerd.yaml
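The parser filter above references the nginx and json parsers that ship in the image's parsers.conf. If your application logs use a custom format, a ClusterParser can be defined instead; the resource below is an illustrative sketch (the name, regex, and time format are hypothetical, and the spec fields should be checked against your operator version):

```shell
# Hypothetical custom parser; adjust the regex to your log format.
cat > /tmp/cluster-parser.yaml << 'EOF'
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterParser
metadata:
  name: my-app-parser
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  regex:
    regex: '^(?<time>[^ ]+) (?<level>[^ ]+) (?<message>.*)$'
    timeKey: time
    timeFormat: '%Y-%m-%dT%H:%M:%S'
EOF
```

It would then be referenced from a filter with parser: my-app-parser, the same way nginx,json is referenced above.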
4.2 Fluent Bit + Fluentd Mode (Lab 2)
- Deploy Fluent Bit
mkdir -p /data/fluent-operator/fluentbit-fluentd
cat > /data/fluent-operator/fluentbit-fluentd/fluentbit.yaml << 'EOF'
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFluentBitConfig
metadata:
name: fluent-bit-config
labels:
app.kubernetes.io/name: fluent-bit
spec:
service:
parsersFile: parsers.conf
inputSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit"
filterSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit"
outputSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit"
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: FluentBit
metadata:
name: fluent-bit
namespace: fluent-operator
labels:
app.kubernetes.io/name: fluent-bit
spec:
image: kubesphere/fluent-bit:v1.9.3
positionDB:
hostPath:
path: /var/lib/fluent-bit/
resources:
requests:
cpu: 10m
memory: 25Mi
limits:
cpu: 500m
memory: 200Mi
fluentBitConfigName: fluent-bit-config
tolerations:
- operator: Exists
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
name: clusterinput-fluentbit-systemd
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit"
spec:
systemd:
tag: service.kubelet
path: /var/log/journal
db: /fluent-bit/tail/systemd.db
systemdFilter:
- _SYSTEMD_UNIT=kubelet.service
- _SYSTEMD_UNIT=containerd.service
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFilter
metadata:
name: clusterfilter-fluentbit-systemd
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit"
spec:
match: service.*
filters:
- lua:
script:
key: systemd.lua
name: fluent-bit-lua
call: add_time
timeAsTable: true
---
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: operator
app.kubernetes.io/name: fluent-bit-lua
name: fluent-bit-lua
namespace: fluent-operator
data:
systemd.lua: |
function add_time(tag, timestamp, record)
new_record = {}
timeStr = os.date("!*t", timestamp["sec"])
t = string.format("%4d-%02d-%02dT%02d:%02d:%02d.%sZ",
timeStr["year"], timeStr["month"], timeStr["day"],
timeStr["hour"], timeStr["min"], timeStr["sec"],
timestamp["nsec"])
kubernetes = {}
kubernetes["pod_name"] = record["_HOSTNAME"]
kubernetes["container_name"] = record["SYSLOG_IDENTIFIER"]
kubernetes["namespace_name"] = "kube-system"
new_record["time"] = t
new_record["log"] = record["MESSAGE"]
new_record["kubernetes"] = kubernetes
return 1, timestamp, new_record
end
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
name: fluentd
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "fluentbit"
spec:
matchRegex: (?:kube|service)\.(.*)
forward:
host: fluentd.fluent-operator.svc
port: 24224
EOF
kubectl apply -f /data/fluent-operator/fluentbit-fluentd/fluentbit.yaml
- Deploy Fluentd
cat > /data/fluent-operator/fluentbit-fluentd/fluentd.yaml << 'EOF'
apiVersion: fluentd.fluent.io/v1alpha1
kind: Fluentd
metadata:
name: fluentd
namespace: fluent-operator
labels:
app.kubernetes.io/name: fluentd
spec:
globalInputs:
- forward:
bind: 0.0.0.0
port: 24224
replicas: 1
image: kubesphere/fluentd:v1.14.4
fluentdCfgSelector:
matchLabels:
config.fluentd.fluent.io/enabled: "true"
buffer:
hostPath:
path: "/var/log/fluentd-buffer"
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterFluentdConfig
metadata:
name: cluster-fluentd-config
labels:
config.fluentd.fluent.io/enabled: "true"
spec:
watchedNamespaces:
- kube-system
- default
clusterFilterSelector:
matchLabels:
filter.fluentd.fluent.io/type: "buffer"
filter.fluentd.fluent.io/enabled: "true"
clusterOutputSelector:
matchLabels:
output.fluentd.fluent.io/scope: "cluster"
output.fluentd.fluent.io/enabled: "true"
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterFilter
metadata:
name: cluster-fluentd-filter-buffer
labels:
filter.fluentd.fluent.io/type: "buffer"
filter.fluentd.fluent.io/enabled: "true"
spec:
filters:
- recordTransformer:
enableRuby: true
records:
- key: kubernetes_ns
value: ${record["kubernetes"]["namespace_name"]}
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterOutput
metadata:
name: cluster-fluentd-output-es
labels:
output.fluentd.fluent.io/scope: "cluster"
output.fluentd.fluent.io/enabled: "true"
spec:
outputs:
- elasticsearch:
host: elasticsearch.elasticsearch.svc
port: 9200
logstashFormat: true
logstashPrefix: fluent-log-cluster-fd
buffer:
type: file
      path: /buffers/es_buffer  # the hostPath /var/log/fluentd-buffer must be writable (e.g. chmod 777 /var/log/fluentd-buffer), otherwise Fluentd reports permission denied
EOF
kubectl apply -f /data/fluent-operator/fluentbit-fluentd/fluentd.yaml
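As the buffer comment in the ClusterOutput notes, the hostPath buffer directory has to exist and be writable by the Fluentd container. On every node that may schedule the Fluentd pod, prepare it first (chmod 777 is a blunt instrument; tighten it to match your security policy):

```shell
# Pre-create the hostPath directory backing the Fluentd buffer volume.
mkdir -p /var/log/fluentd-buffer
chmod 777 /var/log/fluentd-buffer
```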
4.3 Ingesting nginx and JSON Logs with Fluent Bit (some issues remain unresolved)
apiVersion: fluentbit.fluent.io/v1alpha2
kind: FluentBit
metadata:
name: fluent-bit-nginx
namespace: fluent-operator
labels:
app.kubernetes.io/name: fluent-bit-nginx
spec:
image: kubesphere/fluent-bit:v1.9.3
positionDB:
hostPath:
path: /var/lib/fluent-bit-nginx/
resources:
requests:
cpu: 10m
memory: 25Mi
limits:
cpu: 500m
memory: 200Mi
fluentBitConfigName: fluent-bit-config-nginx
tolerations:
- operator: Exists
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFluentBitConfig
metadata:
name: fluent-bit-config-nginx
labels:
app.kubernetes.io/name: fluent-bit-nginx
spec:
service:
parsersFile: parsers.conf
inputSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "nginx"
filterSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "nginx"
outputSelector:
matchLabels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "nginx"
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
name: tail-nginx
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "nginx"
spec:
tail:
tag: nginx.*
path: /var/log/containers/*.log
parser: cri
refreshIntervalSeconds: 10
memBufLimit: 5MB
skipLongLines: true
db: /fluent-bit/tail/pos.db
dockerMode: false
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterFilter
metadata:
name: kubernetes-nginx
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "nginx"
spec:
match: nginx.*
filters:
- kubernetes:
kubeURL: https://kubernetes.default.svc:443
kubeCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubeTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
labels: false
annotations: false
tlsVerify: false
# keepLog: false
# mergeLogTrim: true
- parser:
keyName: message
parser: nginx,json
      preserveKey: true  # keep the original message key after parsing
      reserveData: true  # keep all other original fields from before parsing
- nest:
operation: lift
nestedUnder: kubernetes
addPrefix: kubernetes_
- modify:
rules:
- remove: stream
# - remove: kubernetes_pod_id
# - remove: kubernetes_host
# - remove: kubernetes_container_hash
- nest:
operation: nest
wildcard:
- kubernetes_*
nestUnder: kubernetes
removePrefix: kubernetes_
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
name: es-nginx
labels:
fluentbit.fluent.io/enabled: "true"
fluentbit.fluent.io/mode: "nginx"
spec:
  matchRegex: (?:kube|service|nginx)\.(.*)
es:
host: elasticsearch.elasticsearch.svc
port: 9200
generateID: true
logstashPrefix: fluent-log-fb-containerd-nginx
logstashFormat: true
timeKey: "@timestamp"
5. Troubleshooting
- 5.1 Retrieve the rendered fluent-bit.conf
# install jq
rpm -ivh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install jq -y
# dump the rendered fluent-bit.conf
kubectl get secrets -n fluent-operator fluent-bit-config -o json | jq '.data."fluent-bit.conf"' | xargs echo | base64 --decode
[Service]
Parsers_File parsers.conf
[Input]
Name systemd
Path /var/log/journal
DB /fluent-bit/tail/systemd.db
Tag service.kubelet
Systemd_Filter _SYSTEMD_UNIT=kubelet.service
Systemd_Filter _SYSTEMD_UNIT=containerd.service
[Filter]
Name lua
Match service.*
script /fluent-bit/config/systemd.lua
call add_time
time_as_table true
[Output]
Name forward
Match_Regex (?:kube|service)\.(.*)
Host fluentd.fluent-operator.svc
Port 24224
- 5.2 Retrieve the rendered fluentd app.conf
# dump the rendered fluentd app.conf
kubectl get secrets -n fluent-operator fluentd-config -o json | jq '.data."app.conf"' | xargs echo | base64 --decode
<source>
@type forward
bind 0.0.0.0
port 24224
</source>
<match **>
@id main
@type label_router
<route>
@label @cc1e154ba6a75c2de510ede5385b61da
<match>
namespaces default,kube-system
</match>
</route>
</match>
<label @cc1e154ba6a75c2de510ede5385b61da>
<match **>
@id ClusterFluentdConfig-cluster-cluster-fluentd-config::cluster::clusteroutput::cluster-fluentd-output-es-0
@type elasticsearch
host elasticsearch.elasticsearch.svc
logstash_format true
logstash_prefix fluent-log-cluster-fd
port 9200
</match>
</label>