1. Add the Helm repos

[ec2-user@ip-172-31-32-32 ~]$ # add prometheus Helm repo

[ec2-user@ip-172-31-32-32 ~]$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

"prometheus-community" has been added to your repositories

[ec2-user@ip-172-31-32-32 ~]$

[ec2-user@ip-172-31-32-32 ~]$ # add grafana Helm repo

[ec2-user@ip-172-31-32-32 ~]$ helm repo add grafana https://grafana.github.io/helm-charts

"grafana" has been added to your repositories

[ec2-user@ip-172-31-32-32 ~]$
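
With both repos added, it is usually worth refreshing the local chart index before installing (a small step not captured in the original session):

helm repo update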

2. Deploy Prometheus

[ec2-user@ip-172-31-32-32 ~]$ kubectl create namespace prometheus


namespace/prometheus created

[ec2-user@ip-172-31-32-32 ~]$

[ec2-user@ip-172-31-32-32 ~]$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

"prometheus-community" already exists with the same configuration, skipping

[ec2-user@ip-172-31-32-32 ~]$

[ec2-user@ip-172-31-32-32 ~]$ helm install prometheus prometheus-community/prometheus \

> --namespace prometheus \

> --set alertmanager.persistentVolume.storageClass="gp2" \

> --set server.persistentVolume.storageClass="gp2"

NAME: prometheus

LAST DEPLOYED: Thu Jun 16 02:48:29 2022

NAMESPACE: prometheus

STATUS: deployed

REVISION: 1

TEST SUITE: None

NOTES:

The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:

prometheus-server.prometheus.svc.cluster.local

Get the Prometheus server URL by running these commands in the same shell:

export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")

kubectl --namespace prometheus port-forward $POD_NAME 9090

The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:

prometheus-alertmanager.prometheus.svc.cluster.local

Get the Alertmanager URL by running these commands in the same shell:

export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")

kubectl --namespace prometheus port-forward $POD_NAME 9093

#################################################################################

###### WARNING: Pod Security Policy has been moved to a global property. #####

###### use .Values.podSecurityPolicy.enabled with pod-based #####

###### annotations #####

###### (e.g. .Values.nodeExporter.podSecurityPolicy.annotations) #####

#################################################################################

The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:

prometheus-pushgateway.prometheus.svc.cluster.local

Get the PushGateway URL by running these commands in the same shell:

export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")

kubectl --namespace prometheus port-forward $POD_NAME 9091

For more information on running Prometheus, visit:

https://prometheus.io/

[ec2-user@ip-172-31-32-32 ~]$

Check the Prometheus server address

[ec2-user@ip-172-31-32-32 ~]$ export POD_NAME=$(kubectl get pods --namespace prometheus -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")

[ec2-user@ip-172-31-32-32 ~]$ kubectl --namespace prometheus port-forward $POD_NAME 9090

Forwarding from 127.0.0.1:9090 -> 9090

Forwarding from [::1]:9090 -> 9090
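
Note that kubectl port-forward binds to the loopback address by default, so Prometheus is reachable only from this instance. If you need to reach it from another machine, one option (not used in this session, and it exposes the port to anything that can reach the instance) is to bind all interfaces:

kubectl --namespace prometheus port-forward $POD_NAME --address 0.0.0.0 9090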

Check the pod addresses

[ec2-user@ip-172-31-32-32 ~]$ kubectl get pods -n prometheus -o wide |grep prometheus

prometheus-alertmanager-6764b6b758-7g8zd         2/2   Running   0     16m   172.31.100.245   ip-172-31-100-152.ap-southeast-1.compute.internal   <none>   <none>

prometheus-kube-state-metrics-7c6ffc7686-kx5sc   1/1   Running   0     16m   172.31.100.210   ip-172-31-100-152.ap-southeast-1.compute.internal   <none>   <none>

prometheus-node-exporter-fc5rn                   1/1   Running   0     16m   172.31.100.152   ip-172-31-100-152.ap-southeast-1.compute.internal   <none>   <none>

prometheus-node-exporter-s5vgl                   1/1   Running   0     16m   172.31.200.202   ip-172-31-200-202.ap-southeast-1.compute.internal   <none>   <none>

prometheus-pushgateway-6bdd5f56cb-jtdqk          1/1   Running   0     16m   172.31.200.245   ip-172-31-200-202.ap-southeast-1.compute.internal   <none>   <none>

prometheus-server-77f6df8859-76lq8               2/2   Running   0     16m   172.31.100.226   ip-172-31-100-152.ap-southeast-1.compute.internal   <none>   <none>

[ec2-user@ip-172-31-32-32 ~]$

Look up the EIP from the private address

[ec2-user@ip-172-31-32-32 ~]$ kubectl describe service kubernetes

Name:              kubernetes

Namespace:         default

Labels:            component=apiserver

                   provider=kubernetes

Annotations:       <none>

Selector:          <none>

Type:              ClusterIP

IP:                10.100.0.1

Port:              https  443/TCP

TargetPort:        443/TCP

Endpoints:         172.31.100.60:443,172.31.200.23:443

Session Affinity:  None

Events:            <none>

[ec2-user@ip-172-31-32-32 ~]$
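
The Endpoints above are the private IPs of the API server's network interfaces. As a sketch of the private-address-to-EIP lookup, assuming the AWS CLI is configured for this account and region, you can query the network interface that owns a private IP and read its associated public address:

aws ec2 describe-network-interfaces \
    --filters Name=addresses.private-ip-address,Values=172.31.100.60 \
    --query 'NetworkInterfaces[0].Association.PublicIp' \
    --output text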

3. Install and deploy Grafana

[ec2-user@ip-172-31-32-32 ~]$ sudo cat << EoF > ${HOME}/environment/grafana/grafana.yaml

> datasources:

>   datasources.yaml:

>     apiVersion: 1

>     datasources:

>     - name: Prometheus

>       type: prometheus

>       url: http://prometheus-server.prometheus.svc.cluster.local

>       access: proxy

>       isDefault: true

> EoF

-bash: /home/ec2-user/environment/grafana/grafana.yaml: Permission denied

[ec2-user@ip-172-31-32-32 ~]$
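
The Permission denied is expected even with sudo: sudo elevates only cat, while the output redirection is still performed by the non-root shell, which cannot write into the root-owned grafana directory. Besides the sudo vi workaround used below, a common alternative (same YAML, same target path) is to let sudo tee perform the write:

cat << EoF | sudo tee ${HOME}/environment/grafana/grafana.yaml > /dev/null
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      url: http://prometheus-server.prometheus.svc.cluster.local
      access: proxy
      isDefault: true
EoF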

[ec2-user@ip-172-31-32-32 ~]$ cd environment/grafana/

[ec2-user@ip-172-31-32-32 grafana]$ vi grafana.yaml

[ec2-user@ip-172-31-32-32 grafana]$ ls

[ec2-user@ip-172-31-32-32 grafana]$ sudo vi grafana.yaml

[ec2-user@ip-172-31-32-32 grafana]$

[ec2-user@ip-172-31-32-32 grafana]$ ls -l

total 4

-rw-r--r-- 1 root root 221 Jun 16 03:34 grafana.yaml

[ec2-user@ip-172-31-32-32 grafana]$ kubectl create namespace grafana

namespace/grafana created

[ec2-user@ip-172-31-32-32 grafana]$

[ec2-user@ip-172-31-32-32 grafana]$ helm install grafana grafana/grafana \

> --namespace grafana \

> --set persistence.storageClassName="gp2" \

> --set persistence.enabled=true \

> --set adminPassword='EKS!sAWSome' \

> --values ${HOME}/environment/grafana/grafana.yaml \

> --set service.type=LoadBalancer

NAME: grafana

LAST DEPLOYED: Thu Jun 16 03:35:40 2022

NAMESPACE: grafana

STATUS: deployed

REVISION: 1

NOTES:

1. Get your 'admin' user password by running:

kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:

grafana.grafana.svc.cluster.local

Get the Grafana URL to visit by running these commands in the same shell:

NOTE: It may take a few minutes for the LoadBalancer IP to be available.

You can watch the status of by running 'kubectl get svc --namespace grafana -w grafana'

export SERVICE_IP=$(kubectl get svc --namespace grafana grafana -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

http://$SERVICE_IP:80

3. Login with the password from step 1 and the username: admin

[ec2-user@ip-172-31-32-32 grafana]$

[ec2-user@ip-172-31-32-32 grafana]$ kubectl get all -n grafana

NAME                          READY   STATUS    RESTARTS   AGE

pod/grafana-f95fc5d67-xqt4p   0/1     Pending   0          85s

NAME              TYPE           CLUSTER-IP     EXTERNAL-IP                                                                     PORT(S)        AGE

service/grafana   LoadBalancer   10.100.15.38   af934472ec7504b7fabfb0beddb23727-1347505905.ap-southeast-1.elb.amazonaws.com   80:30806/TCP   85s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/grafana   0/1     1            0           85s

NAME                                DESIRED   CURRENT   READY   AGE

replicaset.apps/grafana-f95fc5d67   1         1         0       85s

[ec2-user@ip-172-31-32-32 grafana]$ export ELB=$(kubectl get svc -n grafana grafana -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

[ec2-user@ip-172-31-32-32 grafana]$

[ec2-user@ip-172-31-32-32 grafana]$ echo "http://$ELB"

http://af934472ec7504b7fabfb0beddb23727-1347505905.ap-southeast-1.elb.amazonaws.com

[ec2-user@ip-172-31-32-32 grafana]$
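
Before opening the URL in a browser, you can check from the shell whether Grafana is serving yet; Grafana exposes a health endpoint for this (a minimal check, assuming the load balancer has finished provisioning):

curl -s http://$ELB/api/health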

Accessing Grafana returned an error; it turned out the cluster was short on resources, as the pod events below indicate.

Check with: kubectl describe pod grafana-f95fc5d67-xqt4p -n grafana

[screenshot: kubectl describe output showing the grafana pod Pending due to insufficient resources]

Delete the leftover pods from Lab 1 and Lab 2 to free up resources.

After the pods were deleted, their controllers automatically recreated them under new names, so deleting pods one by one does not stick. Remove them by deleting the owning Deployments instead, as sketched below.

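As a sketch (the Deployment name and namespace below are placeholders), first list the Deployments to discover the current names, then delete the owning Deployment so its ReplicaSet stops recreating pods:

kubectl get deployments --all-namespaces
kubectl delete deployment <deployment-name> -n <namespace>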

Force-delete the stuck pod

[ec2-user@ip-172-31-32-32 grafana]$ kubectl delete pod cloudwatch-agent-2z54g -n amazon-cloudwatch --force --grace-period=0

warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.

pod "cloudwatch-agent-2z54g" force deleted

[ec2-user@ip-172-31-32-32 grafana]$

List all the Deployments:

[screenshot: kubectl get deployments output]


Retrieve the Grafana admin password (the default username is admin)

[ec2-user@ip-172-31-32-32 grafana]$ kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

EKS!sAWSome

[ec2-user@ip-172-31-32-32 grafana]$


[ec2-user@ip-172-31-32-32 grafana]$ kubectl delete deployment wordpress-cwi

Error from server (NotFound): deployments.apps "wordpress-cwi" not found

[ec2-user@ip-172-31-32-32 grafana]$ kubectl delete deployment understood-zebu-wordpress

Error from server (NotFound): deployments.apps "understood-zebu-wordpress" not found

[ec2-user@ip-172-31-32-32 grafana]$

