
[Mastering Kubernetes for DevOps] Cluster Maintenance, Security, and Troubleshooting - Backup and Restore Methods

nineDeveloper 2021. 1. 20.

Backup and Restore Methods


Types of Backup Resources

  • Pod information files (YAML)

    • $ kubectl get all --all-namespaces -o yaml > all-deploy-services.yaml
    • $ kubectl create -f all-deploy-services.yaml

    Use the first command to dump every resource to a YAML file and keep it as a backup; restore it later with the second command.

  • The etcd database

    Data such as ConfigMaps, Secrets, and PVCs lives in etcd; check the data stored in etcd and back it up.
    The amount of data is not very large.

    ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb
    # exit 0
    
    # verify the snapshot
    ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb
    +----------+----------+------------+------------+
    |   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
    +----------+----------+------------+------------+
    | fe01cf57 |       10 |          7 | 2.1 MB     |
    +----------+----------+------------+------------+
  • Persistent Volumes: back up using conventional methods

    • Docker repositories, programs, policies, and similar items
  • etcd backup file restore command

    • Restore the backed-up snapshotdb, adjusting the options to match your cluster
sudo ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --cert /etc/kubernetes/pki/etcd/server.crt \
    --key /etc/kubernetes/pki/etcd/server.key \
    --data-dir /var/lib/etcd-restore \
    --initial-cluster='master=https://127.0.0.1:2380' \
    --name=master \
    --initial-cluster-token this-is-token \
    --initial-advertise-peer-urls https://127.0.0.1:2380 \
    snapshot restore ~/yaml/snapshotdb

Detailed information on the options: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/configuration.md

  • Edit the etcd.yaml static pod
    • sudo vim /etc/kubernetes/manifests/etcd.yaml
      • Find every occurrence of the following directory and change it
        • /var/lib/etcd --> /var/lib/etcd-restore
      • Add an option
        • --initial-cluster-token=this-is-token
  • Wait about a minute after etcd boots, then verify that kubectl works
    • If etcd fails to boot or there are problems, check the docker logs (see the sketch below)
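
If the etcd static pod does not come back up cleanly, the container logs are the first place to look. A minimal sketch, assuming a Docker-based container runtime on the master node (the <container-id> placeholder must be replaced with the ID printed by the first command):

# find the etcd container (assumes the Docker runtime)
$ sudo docker ps -a | grep etcd

# read its startup logs; replace <container-id> with the ID printed above
$ sudo docker logs <container-id>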

Hands-on Practice

Back up all deployed services

$ kubectl get all --all-namespaces -o yaml > all-deploy-services.yaml

Download the etcd release with wget in order to use etcdctl

wget https://github.com/etcd-io/etcd/releases/download/v3.4.14/etcd-v3.4.14-linux-amd64.tar.gz

Extract the etcd archive

tar -xzf etcd-v3.4.14-linux-amd64.tar.gz

Change into the extracted directory to use the etcdctl command

cd etcd-v3.4.14-linux-amd64
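
Optionally confirm that the downloaded etcdctl binary runs before taking the snapshot (a quick sanity check, not part of the original walkthrough):

$ ./etcdctl version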

Take an etcd snapshot backup (snapshotdb)

$ sudo ETCDCTL_API=3 ./etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key snapshot save snapshotdb
{"level":"info","ts":1611047231.6908872,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"snapshotdb.part"}
{"level":"info","ts":"2021-01-19T18:07:11.697+0900","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1611047231.6976821,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"127.0.0.1:2379"}
{"level":"info","ts":"2021-01-19T18:07:11.751+0900","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1611047231.7691748,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"127.0.0.1:2379","size":"4.1 MB","took":0.078170255}
{"level":"info","ts":1611047231.7693815,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"snapshotdb"}
Snapshot saved at snapshotdb

View status information for the etcd backup with the status command

$ sudo ETCDCTL_API=3 ./etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key snapshot status snapshotdb
52e715b, 397499, 1088, 4.1 MB

Add the --write-out=table option to display the etcd backup status as a table

$ sudo ETCDCTL_API=3 ./etcdctl --endpoints 127.0.0.1:2379 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key snapshot status snapshotdb --write-out=table
+---------+----------+------------+------------+
|  HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+---------+----------+------------+------------+
| 52e715b |   397499 |       1088 |     4.1 MB |
+---------+----------+------------+------------+

Give ownership of the snapshotdb file to the server1 user

$ sudo chown server1 snapshotdb
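
To confirm the ownership change took effect, a quick optional check:

$ ls -l snapshotdb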

Copy the backed-up all-deploy-services.yaml and snapshotdb files
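
The destination is not spelled out here, but the restore command below reads the snapshot from ~/yaml/snapshotdb, so here is a minimal sketch assuming both backup files should end up in ~/yaml on the master node:

# the target directory ~/yaml is an assumption based on the restore command below
$ mkdir -p ~/yaml
$ cp all-deploy-services.yaml snapshotdb ~/yaml/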

Run the snapshotdb restore command from inside the extracted etcd directory.
Store the restored data in a separate etcd-restore directory.

sudo ETCDCTL_API=3 ./etcdctl --endpoints=127.0.0.1:2379 \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --cert /etc/kubernetes/pki/etcd/server.crt \
    --key /etc/kubernetes/pki/etcd/server.key \
    --data-dir /var/lib/etcd-restore \
    --initial-cluster='master=https://127.0.0.1:2380' \
    --name=master \
    --initial-cluster-token this-is-token \
    --initial-advertise-peer-urls https://127.0.0.1:2380 \
    snapshot restore ~/yaml/snapshotdb

Output on a successful restore

{"level":"info","ts":1611048259.7157,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"./snapshotdb","wal-dir":"/var/lib/etcd-restore/member/wal","data-dir":"/var/lib/etcd-restore","snap-dir":"/var/lib/etcd-restore/member/snap"}
{"level":"info","ts":1611048259.760945,"caller":"mvcc/kvstore.go:380","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":396828}
{"level":"info","ts":1611048259.7682946,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"63018d27b801e4a4","local-member-id":"0","added-peer-id":"90cab5eb03ee2ff3","added-peer-peer-urls":["https://127.0.0.1:2380"]}
{"level":"info","ts":1611048259.7744246,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"./snapshotdb","wal-dir":"/var/lib/etcd-restore/member/wal","data-dir":"/var/lib/etcd-restore","snap-dir":"/var/lib/etcd-restore/member/snap"}

Confirm that snapshotdb was restored correctly under /var/lib/etcd-restore

$ sudo -i
$ cd /var/lib/etcd-restore/
$ ls
member

Edit the static pod

$ vi /etc/kubernetes/manifests/etcd.yaml
..
spec:
  containers:
  - command:
    ...
    - --data-dir=/var/lib/etcd-restore # changed etcd -> etcd-restore
    - --initial-cluster-token=this-is-token # added the token option
    ...
    volumeMounts:
    - mountPath: /var/lib/etcd-restore # changed etcd -> etcd-restore
  ...
  volumes:
  ...
  - hostPath:
      path: /var/lib/etcd-restore # changed etcd -> etcd-restore
  ...
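
After saving etcd.yaml, the kubelet notices the manifest change and recreates the etcd static pod with the new settings. One way to confirm that the restored data directory is actually in use is to check the arguments of the running etcd process (a sketch; the exact listing depends on your container runtime):

# the etcd process should now show --data-dir=/var/lib/etcd-restore
$ ps -ef | grep [e]tcd | grep data-dir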

Observe that the API server temporarily loses its etcd store and cannot communicate

$ kubectl get pod

etcd needs a minute or more to come back up

$ sudo docker ps -a | grep etcd
108c9492b1f1   k8s.gcr.io/pause:3.2   "/pause"                 9 hours ago         Up 9 hours                         k8s_POD_etcd-master_kube-system_527dd0bdddb333977f0320f4dd615fe8_21

The Deployments have already been updated

$ kubectl get pod

Confirm that the Services were restored correctly as well

$ kubectl get all -n kubernetes-dashboard
NAME                                            READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-894c58c65-425pd   1/1     Running   0          98s
pod/kubernetes-dashboard-775dfc9478-gbmw8       1/1     Running   0          98s

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   10.36.6.235    <none>        8000/TCP   98s
service/kubernetes-dashboard        ClusterIP   10.36.13.174   <none>        443/TCP    102s

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           99s
deployment.apps/kubernetes-dashboard        1/1     1            1           100s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-894c58c65   1         1         1       99s
replicaset.apps/kubernetes-dashboard-775dfc9478       1         1         1       100s

Finally, restore all services from the YAML file.
Most of them print a message saying the resource already exists, but you can confirm that everything is restored correctly.

$ kubectl create -f ~/yaml/all-deploy-services.yaml
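
Note that kubectl create refuses resources that already exist, which is where the "already exists" messages come from. If you prefer to avoid those errors, kubectl apply updates existing objects instead of failing; this is an alternative, not part of the original walkthrough:

# idempotent alternative: apply patches resources that already exist instead of erroring
$ kubectl apply -f ~/yaml/all-deploy-services.yaml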

Check the Secret token names

$ kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
default-token-t6xr4                kubernetes.io/service-account-token   3      14m
kubernetes-dashboard-certs         Opaque                                0      14m
kubernetes-dashboard-csrf          Opaque                                1      14m
kubernetes-dashboard-key-holder    Opaque                                2      14m
kubernetes-dashboard-token-9rmv4   kubernetes.io/service-account-token   3      14m

Check the kubernetes-dashboard token

$ kubectl describe secret -n kubernetes-dashboard kubernetes-dashboard-token-9rmv4
Name:         kubernetes-dashboard-token-9rmv4
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: b77c3ca7-26c4-41c6-a99a-63eb62a94c41
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1159 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlRRenlNMnU0YlQyMVJRUFNWMHhuTUt5T3JuZEk3d3Y0SjdiTm1KMHQtTEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2V
ydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi05cm12NCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI3N2MzY
2E3LTI2YzQtNDFjNi1hOTlhLTYzZWI2MmE5NGM0MSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.nHZATstFsCn69kyFlctRBg1Aa3nICWrc_JdrgV5-JifmmWIcJuaGsMCeE6cnLrLunwhS26aXCA35KjlEkGmD89rXa06OCwzWy9ydrf-jSRKG37OvUa4zT6
XQJVGDT1e-qB9oYF-YzKfxJXOfOKGNU1vqEXVIQJwNBPa9jh4NVdorfcTYBtaLyn7ktZvDX2tH9tjmNwBpg00Wd8v9kTa0zJiLGvtJE2uyo-Ct9WkKTn1ikCzxLqX5Lz2V-4C-37r8UYpvL_MpHHkEAlcG4nPQDJ8TVK7XYn1xLH6_or5CVZPJxq_Bb3Q9CF5oJrAjQzQrJPF1bKdvVoLOS8PwvwhK6g
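
Instead of copying the token out of the describe output by hand, it can also be pulled straight from the Secret. A minimal sketch, assuming the secret name shown above (the random suffix differs per cluster):

# print only the dashboard token, base64-decoded (the secret name varies per cluster)
$ kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-9rmv4 \
    -o jsonpath='{.data.token}' | base64 -d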

Connect to 127.0.0.1:31487

Select the Token option, paste in the token printed above, and log in
