Building and Installing Kubernetes v1.6.4 on CentOS 7

1. Introduction

I recently needed to look into QoS-related issues in Kubernetes (abbreviated to k8s below). To understand it more deeply, and to lay a foundation for future development work on k8s, I decided to build and install the latest stable release, v1.6.4, from source on CentOS 7. This post summarizes the problems encountered while building and installing k8s.


2. Preparation

Creating the virtual machines

Prepare at least two virtual or physical machines. This post uses three KVM virtual machines, created and started on an OpenStack platform:

Role     IP address       Hostname   OS
Master   10.10.10.32/24   master     CentOS 7
Node     10.10.10.33/24   node1      CentOS 7
Node     10.10.10.34/24   node2      CentOS 7

Modifying the hosts file

Both the k8s build and the later use of Docker need to pull images from Google's container registry; modifying the hosts file is one way to reach Google's servers:

yum install -y git
git clone https://github.com/racaljk/hosts.git
mv /etc/hosts /etc/hosts.bak
cp hosts/hosts /etc
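
As a quick check, confirm that name resolution now goes through the new hosts file (assuming the community hosts file carries an entry for gcr.io, the registry the build later pulls images from):

# getent consults /etc/hosts first, so this should print the pinned address
getent hosts gcr.io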

Disabling SELinux

Disable SELinux on all of the virtual machines, then reboot:

sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
reboot
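
After the reboot, confirm that SELinux is off:

# should print "Disabled"
getenforce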

Installing and configuring iptables

Disable firewalld on all of the virtual machines and install iptables:

systemctl stop firewalld
systemctl disable firewalld

yum install -y iptables-services
systemctl start iptables
systemctl enable iptables
iptables-save > /etc/sysconfig/iptables

Installing NTP

Install NTP so that all nodes keep consistent time:

yum install -y ntp
systemctl start ntpd
systemctl enable ntpd
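
To confirm that ntpd is actually synchronizing, list its peers:

# an asterisk in the first column marks the peer currently being synced to
ntpq -p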

Tip: instead of repeating the configuration above on every machine, you can configure a single VM, snapshot it, and boot the remaining VMs from the snapshot.


3. Building Kubernetes

Building the k8s source

The Kubernetes source can be built in two ways. One is to build directly on the host or physical machine, which requires setting up a complete build environment and tends to run into all kinds of dependency problems. The other is to build inside Docker, which is currently the most popular approach and the one used in this post.

Installing Docker

k8s iterates quickly and relies on fairly recent components; the Docker shipped in the default CentOS 7 yum repositories does not appear to be fully compatible with k8s v1.6.4, so the latest stable Docker CE (Community Edition) is needed.

Install the latest stable Docker CE on all Node machines [1]. The differences between Docker CE (Community Edition) and Docker EE (Enterprise Edition) are easy to look up.

  • Remove any old version of Docker (if one is installed)

    yum remove docker docker-common container-selinux docker-selinux docker-engine -y
  • Set up the repository and disable edge releases (see Docker's documentation on the stable and edge channels)

    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
    yum-config-manager --disable docker-ce-edge
  • Refresh the yum cache and list the available Docker CE versions

    yum makecache fast
    yum list docker-ce.x86_64 --showduplicates | sort -r
  • Install the latest stable version and start the service

    yum install docker-ce -y
    systemctl start docker
    systemctl enable docker
  • Test Docker

    docker run hello-world
  • Installing Docker makes some changes to iptables: by default, the default policy of the FORWARD chain in the filter table is changed to DROP. Change it back to ACCEPT:

    iptables -P FORWARD ACCEPT
    iptables-save >/etc/sysconfig/iptables
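
    You can verify that the policy change took effect:

    # the first line of output should now read "-P FORWARD ACCEPT"
    iptables -S FORWARD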

Installing Golang

Pick one of the Nodes on which to build k8s v1.6.4; Golang therefore has to be installed on that Node.

Because k8s moves quickly, the Golang in the default CentOS 7 yum repositories does not appear to be compatible with k8s v1.6.4 either, so install a recent Golang release [2].

  • Install gcc and make

    yum install gcc make -y
  • Download the latest prebuilt Golang binary tarball from https://golang.org/dl/

    wget https://storage.googleapis.com/golang/go1.8.1.linux-amd64.tar.gz
  • Extract it into /usr/local

    tar -C /usr/local -xvzf go1.8.1.linux-amd64.tar.gz
  • Add the environment variable (note the single quotes, so that $PATH is expanded at login time rather than baked in now)

    echo 'export PATH=$PATH:/usr/local/go/bin' >>/etc/profile
    source /etc/profile
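
    Verify the toolchain:

    # should print: go version go1.8.1 linux/amd64
    go version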

Downloading the source

  • Download the v1.6.4 source from the Kubernetes releases page on GitHub

    wget -P /root https://github.com/kubernetes/kubernetes/archive/v1.6.4.tar.gz
  • Extract the source and build; here only the Linux amd64 target is built

    tar xvzf v1.6.4.tar.gz
    cd kubernetes-1.6.4 && make
  • After the build finishes, the binaries are under /root/kubernetes-1.6.4/_output/bin

  • In practice you do not need to build everything; more build options are available, as the sketch below shows
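
    For example, the Makefile's WHAT variable selects a single component to build instead of the whole tree (a sketch; the available options are described in the build documentation inside the source tree):

    # build only the kubelet for the host platform; GOFLAGS is passed through to go build
    make all WHAT=cmd/kubelet GOFLAGS=-v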


4. Master node configuration

Installing and configuring etcd

  • In a real production environment, etcd should be installed on separate machines, possibly as a multi-node cluster; since this is a test environment, etcd is installed on the Master node.
  • Install etcd; the version in the CentOS 7 yum repositories is recent enough

    yum install -y etcd
  • Edit the etcd configuration file and start the service

    sed -i s/localhost:2379/0.0.0.0:2379/g /etc/etcd/etcd.conf
    systemctl enable etcd
    systemctl start etcd
  • Create the Overlay network for all nodes and set its address range: 172.17.0.0/16 is the Overlay network range, the backend type is VXLAN, and each node gets a /24 subnet carved out of it (see the k8s networking documentation for more details)

    etcdctl mk /atomic.io/network/config '{
      "Network": "172.17.0.0/16",
      "SubnetLen": 24,
      "Backend": {
        "Type": "vxlan",
        "VNI": 7890
      }
    }'
  • Check the Overlay network configuration

    [root@master ~]# etcdctl get /atomic.io/network/config
    {
      "Network": "172.17.0.0/16",
      "SubnetLen": 24,
      "Backend": {
        "Type": "vxlan",
        "VNI": 7890
      }
    }

Installing the binaries

  • On the Node where k8s was built, copy the following binaries from /root/kubernetes-1.6.4/_output/bin

    • kube-apiserver
    • kube-controller-manager
    • kube-scheduler
    • kubectl

    to /usr/bin/ on the Master node

    cd /root/kubernetes-1.6.4/_output/bin
    scp kube-apiserver kube-controller-manager kube-scheduler kubectl root@master:/usr/bin
  • Create the corresponding systemd service units and configuration files; adjust the MASTER_ADDRESS and ETCD_SERVERS parameters to match your own setup

    #!/bin/bash
    # Copyright 2016 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.

    MASTER_ADDRESS=${1:-"10.10.10.32"}
    ETCD_SERVERS=${2:-"http://10.10.10.32:2379"}
    SERVICE_CLUSTER_IP_RANGE=${3:-"10.254.0.0/16"}
    ADMISSION_CONTROL=${4:-"NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"}

    mkdir -p /etc/kubernetes

    cat <<EOF >/etc/kubernetes/config
    # --logtostderr=true: log to standard error instead of files
    KUBE_LOGTOSTDERR="--logtostderr=true"

    # --v=0: log level for V logs
    KUBE_LOG_LEVEL="--v=0"

    # --allow-privileged=false: If true, allow privileged containers.
    KUBE_ALLOW_PRIV="--allow-privileged=false"

    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"
    EOF

    cat <<EOF >/etc/kubernetes/apiserver
    # --insecure-bind-address=127.0.0.1: The IP address on which to serve the --insecure-port.
    KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

    # --insecure-port=8080: The port on which to serve unsecured, unauthenticated access.
    KUBE_API_PORT="--insecure-port=8080"

    # --kubelet-port=10250: Kubelet port
    NODE_PORT="--kubelet-port=10250"

    # --etcd-servers=[]: List of etcd servers to watch (http://ip:port),
    # comma separated. Mutually exclusive with -etcd-config
    KUBE_ETCD_SERVERS="--etcd-servers=${ETCD_SERVERS}"

    # --advertise-address=<nil>: The IP address on which to advertise
    # the apiserver to members of the cluster.
    KUBE_ADVERTISE_ADDR="--advertise-address=${MASTER_ADDRESS}"

    # --service-cluster-ip-range=<nil>: A CIDR notation IP range from which to assign service cluster IPs.
    # This must not overlap with any IP ranges assigned to nodes for pods.
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE}"

    # --admission-control="AlwaysAdmit": Ordered list of plug-ins
    # to do admission control of resources into cluster.
    # Comma-delimited list of:
    # LimitRanger, AlwaysDeny, SecurityContextDeny, NamespaceExists,
    # NamespaceLifecycle, NamespaceAutoProvision,
    # AlwaysAdmit, ServiceAccount, ResourceQuota, DefaultStorageClass
    KUBE_ADMISSION_CONTROL="--admission-control=${ADMISSION_CONTROL}"

    # Add your own!
    KUBE_API_ARGS=""
    EOF

    KUBE_APISERVER_OPTS=" \${KUBE_LOGTOSTDERR} \\
    \${KUBE_LOG_LEVEL} \\
    \${KUBE_ETCD_SERVERS} \\
    \${KUBE_API_ADDRESS} \\
    \${KUBE_API_PORT} \\
    \${NODE_PORT} \\
    \${KUBE_ADVERTISE_ADDR} \\
    \${KUBE_ALLOW_PRIV} \\
    \${KUBE_SERVICE_ADDRESSES} \\
    \${KUBE_ADMISSION_CONTROL} \\
    \${KUBE_API_ARGS}"

    cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    After=etcd.service

    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/apiserver
    ExecStart=/usr/bin/kube-apiserver ${KUBE_APISERVER_OPTS}
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF

    cat <<EOF >/etc/kubernetes/controller-manager
    ###
    # The following values are used to configure the kubernetes controller-manager

    # defaults from config and apiserver should be adequate

    # Add your own!
    KUBE_CONTROLLER_MANAGER_ARGS=""
    EOF

    KUBE_CONTROLLER_MANAGER_OPTS=" \${KUBE_LOGTOSTDERR} \\
    \${KUBE_LOG_LEVEL} \\
    \${KUBE_MASTER} \\
    \${KUBE_CONTROLLER_MANAGER_ARGS}"

    cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/controller-manager
    ExecStart=/usr/bin/kube-controller-manager ${KUBE_CONTROLLER_MANAGER_OPTS}
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF

    cat <<EOF >/etc/kubernetes/scheduler
    ###
    # kubernetes scheduler config

    # Add your own!
    KUBE_SCHEDULER_ARGS=""
    EOF

    KUBE_SCHEDULER_OPTS=" \${KUBE_LOGTOSTDERR} \\
    \${KUBE_LOG_LEVEL} \\
    \${KUBE_MASTER} \\
    \${KUBE_SCHEDULER_ARGS}"

    cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/scheduler
    ExecStart=/usr/bin/kube-scheduler ${KUBE_SCHEDULER_OPTS}
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
  • Restart the k8s services on the Master node

    for svc in kube-apiserver kube-controller-manager kube-scheduler; do 
    systemctl restart $svc
    systemctl enable $svc
    systemctl status $svc
    done
  • Check that the k8s services started successfully

    ps -A | grep kube
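
    kubectl can also report the health of the control-plane components and etcd:

    # lists the health of the scheduler, controller-manager, and etcd
    kubectl get componentstatuses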

Configuring iptables

Some services on the Master and the Nodes need to communicate with each other, so add iptables rules. Port 8080 is used by the kube-apiserver service and 2379 by etcd.

iptables -I INPUT -s 10.10.10.0/24 -p tcp --dport 8080 -j ACCEPT
iptables -I INPUT -s 10.10.10.0/24 -p tcp --dport 2379 -j ACCEPT
iptables-save > /etc/sysconfig/iptables

Modifying the hosts file

Add hostname resolution for all nodes:

cat >>/etc/hosts <<EOF
10.10.10.32 master
10.10.10.33 node1
10.10.10.34 node2
EOF

5. Node configuration

Installing and configuring Flannel

  • Install flannel; the version in the CentOS 7 yum repositories is recent enough

    yum install -y flannel
  • Edit the configuration file /etc/sysconfig/flanneld, where 10.10.10.32 is the IP address of the Master node running the etcd service

    FLANNEL_ETCD="http://10.10.10.32:2379"
    FLANNEL_ETCD_KEY="/atomic.io/network"

    The following command makes the change:

    sed -i s/127.0.0.1:2379/10.10.10.32:2379/g /etc/sysconfig/flanneld
  • Start the service; flannel now asks the etcd service on the Master node to allocate a subnet address range for the node

    systemctl restart flanneld
    systemctl enable flanneld
    systemctl status flanneld

    On the Master node, the Nodes' flannel network information can now be inspected:

    [root@master ~]# etcdctl ls /atomic.io/network/subnets
    /atomic.io/network/subnets/172.17.43.0-24
    /atomic.io/network/subnets/172.17.98.0-24
    [root@master ~]# etcdctl get /atomic.io/network/subnets/172.17.98.0-24
    {"PublicIP":"10.10.10.34"}
    [root@master ~]# etcdctl get /atomic.io/network/subnets/172.17.43.0-24
    {"PublicIP":"10.10.10.33"}

Installing the binaries

  • On the Node where k8s was built, copy the following binaries from kubernetes-1.6.4/_output/bin

    • kube-proxy
    • kubelet

    to /usr/bin/ on every Node

    cd kubernetes-1.6.4/_output/bin
    scp kube-proxy kubelet root@node1:/usr/bin
    scp kube-proxy kubelet root@node2:/usr/bin
  • Create the corresponding systemd service units and configuration files; adjust the MASTER_ADDRESS and NODE_HOSTNAME parameters to match your own setup. NODE_HOSTNAME must be consistent with the hostname entries added to the hosts file!

    #!/bin/bash
    # Copyright 2016 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.

    MASTER_ADDRESS=${1:-"10.10.10.32"}
    NODE_HOSTNAME=${2:-"node1"}

    mkdir -p /etc/kubernetes
    mkdir -p /var/lib/kubelet

    cat <<EOF >/etc/kubernetes/config
    # --logtostderr=true: log to standard error instead of files
    KUBE_LOGTOSTDERR="--logtostderr=true"

    # --v=0: log level for V logs
    KUBE_LOG_LEVEL="--v=0"

    # --allow-privileged=false: If true, allow privileged containers.
    KUBE_ALLOW_PRIV="--allow-privileged=false"

    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"
    EOF

    cat <<EOF >/etc/kubernetes/proxy
    ###
    # kubernetes proxy config

    # default config should be adequate

    # Add your own!
    KUBE_PROXY_ARGS=""
    EOF

    KUBE_PROXY_OPTS=" \${KUBE_LOGTOSTDERR} \\
    \${KUBE_LOG_LEVEL} \\
    \${KUBE_MASTER} \\
    \${KUBE_PROXY_ARGS}"

    cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Proxy
    After=network.target

    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/proxy
    ExecStart=/usr/bin/kube-proxy ${KUBE_PROXY_OPTS}
    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF

    cat <<EOF >/etc/kubernetes/kubelet
    # --address=0.0.0.0: The IP address for the Kubelet to serve on (set to 0.0.0.0 for all interfaces)
    KUBELET_ADDRESS="--address=0.0.0.0"

    # --port=10250: The port for the Kubelet to serve on. Note that "kubectl logs" will not work if you set this flag.
    KUBELET_PORT="--port=10250"

    # --hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
    KUBELET_HOSTNAME="--hostname-override=${NODE_HOSTNAME}"

    # --api-servers=[]: List of Kubernetes API servers for publishing events,
    # and reading pods and services. (ip:port), comma separated.
    KUBELET_API_SERVER="--api-servers=http://${MASTER_ADDRESS}:8080"

    # pod infrastructure container
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

    # Add your own!
    KUBELET_ARGS=""
    EOF

    KUBELET_OPTS=" \${KUBE_LOGTOSTDERR} \\
    \${KUBE_LOG_LEVEL} \\
    \${KUBELET_ADDRESS} \\
    \${KUBELET_PORT} \\
    \${KUBELET_HOSTNAME} \\
    \${KUBELET_API_SERVER} \\
    \${KUBE_ALLOW_PRIV} \\
    \${KUBELET_POD_INFRA_CONTAINER} \\
    \${KUBELET_ARGS}"

    cat <<EOF >/usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service

    [Service]
    WorkingDirectory=/var/lib/kubelet
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/kubelet
    ExecStart=/usr/bin/kubelet ${KUBELET_OPTS}
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
  • Restart the k8s services on the Node

    for svc in docker kubelet kube-proxy; do 
    systemctl restart $svc
    systemctl enable $svc
    systemctl status $svc
    done
  • Check that the k8s services started successfully

    ps -A | grep kube
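
    As a node-side sanity check (assuming curl is installed and the Master's insecure port 8080 is reachable), query the API server's version endpoint:

    # should return a JSON document whose gitVersion contains v1.6.4
    curl http://10.10.10.32:8080/version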

Modifying the Docker start-up parameters

Flannel has created the Overlay network; now modify Docker's start-up parameters so that the docker0 bridges on all nodes can communicate with one another.

  • Under /lib/systemd/system there are two entries related to the docker service

    [root@node3 system]# ll docker*
    -rw-r--r-- 1 root root 1016 Jun 6 01:27 docker.service

    docker.service.d:
    total 4
    -rw-r--r-- 1 root root 47 Mar 6 21:54 flannel.conf
  • docker.service.d/flannel.conf pulls in the environment file that flannel generates with the Docker start-up parameters

    [root@node3 system]# cat docker.service.d/flannel.conf 
    [Service]
    EnvironmentFile=-/run/flannel/docker
    [root@node3 system]# cat /run/flannel/docker
    DOCKER_OPT_BIP="--bip=172.17.50.1/24"
    DOCKER_OPT_IPMASQ="--ip-masq=true"
    DOCKER_OPT_MTU="--mtu=1400"
    DOCKER_NETWORK_OPTIONS=" --bip=172.17.50.1/24 --ip-masq=true --mtu=1400"
  • Append the DOCKER_NETWORK_OPTIONS variable to Docker's start-up command and restart Docker

    sed -i 's/ExecStart=\/usr\/bin\/dockerd/& $DOCKER_NETWORK_OPTIONS/' /lib/systemd/system/docker.service
    systemctl daemon-reload
    systemctl restart docker
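
    After the restart, docker0 should carry the flannel-assigned subnet instead of Docker's default:

    # the inet address should match DOCKER_OPT_BIP from /run/flannel/docker
    ip addr show docker0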

Configuring iptables

Services on the Master and the Nodes need to communicate, and the Nodes also reach one another over the VXLAN Overlay network, which is carried over UDP, so add iptables rules for both. Port 10250 is used by the kubelet service.

iptables -I INPUT -s 10.10.10.0/24 -p tcp --dport 10250 -j ACCEPT
iptables -I INPUT -s 10.10.10.0/24 -p udp -j ACCEPT
iptables-save > /etc/sysconfig/iptables

Modifying the hosts file

Add hostname resolution for all nodes:

cat >>/etc/hosts <<EOF
10.10.10.32 master
10.10.10.33 node1
10.10.10.34 node2
EOF

6. Verification

Verifying the environment

Run kubectl get no on the Master node; output like the following means the environment has been created successfully.

[root@master ~]# kubectl get no
NAME      STATUS     AGE       VERSION
node1     NotReady   3d        v1.6.4+d6f433224538d
node2     NotReady   3d        v1.6.4+d6f433224538d

Network test

  • On the Master node, list the subnet ranges allocated to all Nodes

    [root@master ~]# etcdctl ls /atomic.io/network/subnets
    /atomic.io/network/subnets/172.17.49.0-24
    /atomic.io/network/subnets/172.17.100.0-24
  • From node2, ping the docker0 address of node1

    [root@node2 ~]# ping 172.17.100.1
    PING 172.17.100.1 (172.17.100.1) 56(84) bytes of data.
    64 bytes from 172.17.100.1: icmp_seq=1 ttl=64 time=1.09 ms
    64 bytes from 172.17.100.1: icmp_seq=2 ttl=64 time=0.367 ms
    64 bytes from 172.17.100.1: icmp_seq=3 ttl=64 time=0.315 ms
  • From node1, ping the docker0 address of node2

    [root@node1 ~]# ping 172.17.49.1
    PING 172.17.49.1 (172.17.49.1) 56(84) bytes of data.
    64 bytes from 172.17.49.1: icmp_seq=1 ttl=64 time=1.12 ms
    64 bytes from 172.17.49.1: icmp_seq=2 ttl=64 time=0.395 ms
    64 bytes from 172.17.49.1: icmp_seq=3 ttl=64 time=0.325 ms
  • The Flannel Overlay network has been set up successfully


7. Dashboard configuration

The Dashboard itself runs as a Pod in the kube-system namespace. On the Master node, download the yaml file:

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

Since the components of this k8s cluster do not authenticate one another, some content is removed from the stock dashboard template; the modified version follows. Note the following points:

  • Comment out the ServiceAccount and ClusterRoleBinding objects
  • Set the API server address, here --apiserver-host=http://10.10.10.32:8080
  • Comment out the serviceAccountName option in the Deployment
  • If you are not deploying to the Master node, comment out the tolerations in the Deployment
  • Add a node port to the Service, here port 32345
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
        - --apiserver-host=http://10.10.10.32:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 32345
  selector:
    k8s-app: kubernetes-dashboard
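
With the manifest edited, create the Dashboard from the Master node (kubectl create was the usual way to apply a new manifest on v1.6):

kubectl create -f kubernetes-dashboard.yaml
# watch the pod come up in the kube-system namespace
kubectl get pods --namespace kube-system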

Then, from any Node, open http://[Node IP]:32345 in a browser to log in to the cluster's dashboard.


Revision history

Revision   Date             Notes
Created    2017/6/6 12:49   Document created
Edit 1     2017/6/6 13:15   Added the hosts and iptables changes
Edit 2     2017/6/17 11:40  iptables changes
Edit 3     2017/6/19 14:11  Added the Dashboard section

References

  1. https://docs.docker.com/engine/installation/linux/centos/#os-requirements
  2. https://www.digitalocean.com/community/tutorials/how-to-install-go-1-7-on-centos-7
  3. http://www.cnblogs.com/yujinyu/p/6092572.html
  4. http://ju.outofmemory.cn/entry/249082
  5. http://dockone.io/article/1186