Bootstrapping the Kubernetes Control Plane
● Run all of the following on both controller nodes
○ Provision the Control Plane
sudo mkdir -p /etc/kubernetes/config
○ Download the Kubernetes controller binaries
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"
○ Install the downloaded Kubernetes controller binaries
{
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}
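A quick way to confirm the binaries are installed and executable is to print their versions (an optional sanity check, not part of the original steps):
kube-apiserver --version
kubectl version --client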
○ Configure the Kubernetes API Server
{
sudo mkdir -p /var/lib/kubernetes/
sudo cp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
}
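If you want to double-check that every certificate and the encryption config actually landed in /var/lib/kubernetes/, a small loop like this works (optional check):
for f in ca.pem ca-key.pem kubernetes.pem kubernetes-key.pem \
    service-account.pem service-account-key.pem encryption-config.yaml; do
  # the API server will fail to start if any of these is missing
  test -f /var/lib/kubernetes/$f && echo "OK      $f" || echo "MISSING $f"
done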
○ Declare the node's internal address as a variable (controller 1 uses controller 1's address, controller 2 uses controller 2's)
INTERNAL_IP=192.168.47.128   # on controller 1; use 192.168.47.129 on controller 2
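If you would rather not hard-code the address, it can usually be derived from the host itself; this is a sketch that assumes a single-NIC VM whose first address is the internal one:
INTERNAL_IP=$(hostname -I | awk '{print $1}')   # assumption: the first address is the node's internal IP
echo ${INTERNAL_IP}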
○ Declare variables for the controller 1 and controller 2 addresses that will be written into each systemd unit file
CONTROLLER0_IP=192.168.47.128
CONTROLLER1_IP=192.168.47.129
○ Create the kube-apiserver.service systemd unit file
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=2 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --enable-swagger-ui=true \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://$CONTROLLER0_IP:2379,https://$CONTROLLER1_IP:2379 \\
  --event-ttl=1h \\
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config=api/all \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
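Because the heredoc expands ${INTERNAL_IP}, $CONTROLLER0_IP, and $CONTROLLER1_IP at creation time, it is worth confirming the written unit contains real addresses rather than empty strings (optional check):
grep -E 'advertise-address|etcd-servers' /etc/systemd/system/kube-apiserver.service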
○ Configure the Kubernetes Controller Manager
sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/
○ Create the kube-controller-manager.service systemd unit file
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
○ Configure the Kubernetes Scheduler
sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/
○ Create the kube-scheduler.yaml configuration file
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
○ Create the kube-scheduler.service systemd unit file
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
○ Start the controller node services
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}
○ Check the results
sudo systemctl status kube-apiserver kube-controller-manager kube-scheduler
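If any of the services reports failed, the systemd journal is the quickest place to look, for example for the API server:
sudo journalctl -u kube-apiserver --no-pager -n 50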
○ Component health check
kubectl get componentstatuses --kubeconfig admin.kubeconfig
Output like the following indicates success:
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
○ Install a basic web server to handle HTTP health checks
sudo apt-get install -y nginx
cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
{
sudo mv kubernetes.default.svc.cluster.local \
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
}
sudo systemctl restart nginx
sudo systemctl enable nginx
○ Test the nginx HTTP health check proxy
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
Output like the following indicates success:
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Sun, 30 Sep 2018 17:44:24 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
ok
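To tell an nginx problem apart from an API server problem, the same health endpoint can also be queried directly on the API server's TLS port, using the CA copied earlier (optional check):
curl --cacert /var/lib/kubernetes/ca.pem https://127.0.0.1:6443/healthz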
RBAC for Kubelet Authorization
● RBAC for Kubelet Authorization (run on only one of the two controllers)
○ Create the system:kube-apiserver-to-kubelet ClusterRole with permission to access the Kubelet API.
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
○ Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
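To verify that the role and binding were created as intended, kubectl can describe them (optional check):
kubectl describe clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
kubectl describe clusterrolebinding system:kube-apiserver --kubeconfig admin.kubeconfig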
Load Balancer Setup
● Run on the load balancer (LB) node
○ Install nginx
{
sudo apt-get install -y nginx
sudo systemctl enable nginx
}
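The stream block used below needs nginx's stream module. The stock Ubuntu package normally ships it, either built in or via libnginx-mod-stream, and you can confirm before editing anything (optional check; exact output depends on the package build):
nginx -V 2>&1 | tr ' ' '\n' | grep -i stream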
○ Edit nginx.conf
sudo mkdir -p /etc/nginx/tcpconf.d
sudo vi /etc/nginx/nginx.conf
Append the following line at the end of the file:
include /etc/nginx/tcpconf.d/*;
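If you prefer to skip the interactive edit, appending the line non-interactively is equivalent; the include must sit at the top level of nginx.conf, outside the http block, which appending at the end of the file satisfies:
echo 'include /etc/nginx/tcpconf.d/*;' | sudo tee -a /etc/nginx/nginx.conf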
○ Create a configuration file for API load balancing
CONTROLLER0_INTERNAL_IP=192.168.47.128 # controller 1
CONTROLLER1_INTERNAL_IP=192.168.47.129 # controller 2
cat << EOF | sudo tee /etc/nginx/tcpconf.d/kubernetes.conf
stream {
    upstream kubernetes {
        server ${CONTROLLER0_INTERNAL_IP}:6443;
        server ${CONTROLLER1_INTERNAL_IP}:6443;
    }

    server {
        listen 6443;
        listen 443;
        proxy_pass kubernetes;
    }
}
EOF
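Before reloading, nginx can validate the new configuration so that a typo in the stream block does not take the proxy down:
sudo nginx -t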
○ Reload the nginx configuration
sudo nginx -s reload
○ Verify the API server through the load balancer's static IP
KUBERNETES_PUBLIC_ADDRESS=192.168.47.140
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
○ Output
{
  "major": "1",
  "minor": "12",
  "gitVersion": "v1.12.0",
  "gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0",
  "gitTreeState": "clean",
  "buildDate": "2018-09-27T16:55:41Z",
  "goVersion": "go1.10.4",
  "compiler": "gc",
  "platform": "linux/amd64"
}
If you see output like this, the load balancer is working.