Host Information

Hostname   IP            Services
Server1    172.16.1.16   Docker/consul
Server2    172.16.1.17   Docker

Lab Objectives

server1: runs two containers, bbox1 and bbox2
server2: runs two containers, bbox3 and bbox4
net1 (overlay network): used by bbox1 and bbox4 to communicate
net2 (overlay network): used by bbox2 and bbox3 to communicate, with the subnet manually set to 10.10.10.0/24; bbox2 uses the address 10.10.10.100/24 and bbox3 uses 10.10.10.10/24
In addition, bbox1 must also be able to communicate with the containers on net2

Deploy the Overlay Environment

Pull the consul image

[root@server1 ~]# docker pull progrium/consul

Open the firewall ports

On both Server1 and Server2:

firewall-cmd --add-port=2733/tcp --permanent
firewall-cmd --add-port=2733/udp --permanent
firewall-cmd --add-port=2376/udp --permanent
firewall-cmd --add-port=2376/tcp --permanent
firewall-cmd --add-port=7946/tcp --permanent
firewall-cmd --add-port=7946/udp --permanent
firewall-cmd --add-port=4789/udp --permanent
firewall-cmd --add-port=4789/tcp --permanent
firewall-cmd --reload
firewall-cmd --list-ports
2733/tcp 2733/udp 2376/udp 2376/tcp 7946/tcp 7946/udp 4789/udp 4789/tcp
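To spot-check a single rule later, firewall-cmd can also query individual ports (it answers yes or no); for example, for the VXLAN and serf ports:

firewall-cmd --query-port=4789/udp
firewall-cmd --query-port=7946/tcp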

Run consul

[root@server1 ~]# docker run -d --restart always -p 8400:8400 -p 8500:8500 \
-p 8600:53/udp progrium/consul -server -bootstrap -ui-dir /ui
4fe0b24e05e3476d528e2881331128b7c69d185712bf4b69f3bb3e42a3f2f944
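Before pointing the Docker daemons at consul, it is worth confirming the server has bootstrapped. Consul's stock HTTP API exposes a leader endpoint; a non-empty address:port in the reply means a leader was elected:

[root@server1 ~]# curl http://172.16.1.16:8500/v1/status/leader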
[root@server1 ~]# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 --containerd=/run/containerd/containerd.sock \
--cluster-store=consul://172.16.1.16:8500 --cluster-advertise=ens33:2376

--cluster-store points the daemon at the consul address.
--cluster-advertise tells consul the daemon's own connection address.
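The same two settings can also live in /etc/docker/daemon.json instead of the unit file. A sketch (the keys mirror the flags above; note that an option must not be set both as a flag and in daemon.json, or dockerd will refuse to start):

[root@server1 ~]# cat /etc/docker/daemon.json
{
  "cluster-store": "consul://172.16.1.16:8500",
  "cluster-advertise": "ens33:2376"
}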

[root@server1 ~]# scp /usr/lib/systemd/system/docker.service root@172.16.1.17:/usr/lib/systemd/system/
[root@server1 ~]# systemctl daemon-reload 
[root@server1 ~]# systemctl restart docker
[root@server2 ~]# systemctl daemon-reload 
[root@server2 ~]# systemctl restart docker
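If the restarts succeeded, dockerd should now be listening on TCP port 2376 on both hosts; one quick way to confirm:

[root@server1 ~]# ss -tlnp | grep 2376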

Open a browser and visit

http://172.16.1.16:8500

(screenshot: Consul web UI)
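The same information is available from the command line. Assuming Docker's default docker/nodes key prefix for KV-store discovery, the registered nodes can be listed through consul's KV API:

[root@server1 ~]# curl 'http://172.16.1.16:8500/v1/kv/docker/nodes?keys'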

Create the overlay networks

Per the requirements, create two overlay networks, net1 and net2, assigning net2 the subnet 10.10.10.0/24.

# Enable promiscuous mode on the NIC
[root@server1 ~]# ifconfig ens33 promisc
# Create the net1 overlay network
[root@server1 ~]# docker network create --driver overlay --attachable net1
fffcacc4f9fd648235775abdb936f86139badbf6545f3e88b7e3f9f2222d5c35
# Create the net2 overlay network with a fixed subnet and gateway
[root@server1 ~]# docker network create --driver overlay --attachable --subnet 10.10.10.0/24 --gateway 10.10.10.1 net2
fd4f5a8769f2ffce0b08ea34934dbec371fa253ed7161cf591d4c9f6463f8373
[root@server1 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
f7486dea4d5d bridge bridge local
dc8bfdbda464 host host local
fffcacc4f9fd net1 overlay global
fd4f5a8769f2 net2 overlay global
ecbab8a758e6 none null local

Check net2's subnet

[root@server1 ~]# docker network inspect net2
"Subnet": "10.10.10.0/24",
"Gateway": "10.10.10.1"

Run containers on Server1

Run bbox1

[root@server1 ~]# docker run -itd --name bbox1 --network net1 busybox
cef7e37d81c927d6d7f47e7c8efb03cfa158642b87f9c8da85f06410eb9fc976
[root@server1 ~]# docker exec bbox1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
15: eth1@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever

The first time a container joins an overlay network, Docker creates a bridge network named docker_gwbridge, which gives containers on overlay networks access to the outside network.

[root@server1 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
f7486dea4d5d bridge bridge local
66ac54f1fc4c docker_gwbridge bridge local
dc8bfdbda464 host host local
fffcacc4f9fd net1 overlay global
fd4f5a8769f2 net2 overlay global
ecbab8a758e6 none null local
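docker_gwbridge itself is an ordinary local bridge network; inspecting it shows the 172.18.0.0/16 subnet that the containers' eth1 addresses above were allocated from:

[root@server1 ~]# docker network inspect docker_gwbridge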

Run bbox2

[root@server1 ~]# docker run -itd --name bbox2 --network net2 --ip 10.10.10.100 busybox
d7f85cb5fd17d68534b92844f8e4b25caa916bf48fe8295980df1e0ed10d13ce

[root@server1 ~]# docker exec -it bbox2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:0a:0a:0a:64 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.100/24 brd 10.10.10.255 scope global eth0
valid_lft forever preferred_lft forever
20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.3/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever

Run containers on Server2

Run bbox3

[root@server2 ~]# docker run -itd --name bbox3 --network net2 --ip 10.10.10.10 busybox
0e7b3c89a1fb289e7abc88060e16a76feb54381525b57c16de770ec40340b0f0

[root@server2 ~]# docker exec -it bbox3 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:0a:0a:0a:0a brd ff:ff:ff:ff:ff:ff
inet 10.10.10.10/24 brd 10.10.10.255 scope global eth0
valid_lft forever preferred_lft forever
11: eth1@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever

Run bbox4

[root@server2 ~]# docker run -itd --name bbox4 --network net1 busybox
686c5cc2acd2a61db709bc55be862b647abd82e8e5cd142bfff5e02f90928b8b

[root@server2 ~]# docker exec -it bbox4 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.3/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
16: eth1@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.3/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever

Verify communication between containers

[root@server1 ~]# docker exec bbox1 ping -c 2 bbox4
PING bbox4 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.519 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=2.098 ms

--- bbox4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.519/1.308/2.098 ms
[root@server1 ~]# docker exec bbox2 ping -c 2 bbox3
PING bbox3 (10.10.10.10): 56 data bytes
64 bytes from 10.10.10.10: seq=0 ttl=64 time=0.695 ms
64 bytes from 10.10.10.10: seq=1 ttl=64 time=2.318 ms

--- bbox3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.695/1.506/2.318 ms

The requirements also call for bbox1 to be able to communicate with bbox2 and bbox3 on net2.

[root@server1 ~]# docker network connect net2 bbox1

[root@server1 ~]# docker exec bbox1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:0a:00:00:02 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.2/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
15: eth1@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
22: eth2@if23: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:0a:0a:0a:02 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.2/24 brd 10.10.10.255 scope global eth2
valid_lft forever preferred_lft forever

bbox1 ping bbox2/bbox3

[root@server1 ~]# docker exec -it bbox1 ping -c 2 bbox2
PING bbox2 (10.10.10.100): 56 data bytes
64 bytes from 10.10.10.100: seq=0 ttl=64 time=0.146 ms
64 bytes from 10.10.10.100: seq=1 ttl=64 time=0.081 ms

--- bbox2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.081/0.113/0.146 ms
[root@server1 ~]# docker exec -it bbox1 ping -c 2 bbox3
PING bbox3 (10.10.10.10): 56 data bytes
64 bytes from 10.10.10.10: seq=0 ttl=64 time=3.447 ms
64 bytes from 10.10.10.10: seq=1 ttl=64 time=1.899 ms

--- bbox3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.899/2.673/3.447 ms

It is normal that bbox4 cannot ping bbox2/bbox3.
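Different overlay networks are isolated from each other by design: bbox4 only has an interface on net1, so it holds no route to 10.10.10.0/24. This can be confirmed from inside bbox4 (the ping is expected to fail, and the routing table shows no net2 entry):

[root@server2 ~]# docker exec bbox4 ping -c 2 10.10.10.100
[root@server2 ~]# docker exec bbox4 ip route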

Inspect the network namespaces of the different overlays

Server1

[root@server1 ~]# ln -s /var/run/docker/netns/ /var/run/netns
[root@server1 ~]# ip netns
6b6eda831126 (id: 4)
1-fd4f5a8769 (id: 3)
a84b830b145d (id: 2)
1-fffcacc4f9 (id: 1)
c311683c948b (id: 0)

Server2

[root@server2 ~]# ln -s /var/run/docker/netns/ /var/run/netns
[root@server2 ~]# ip netns
78e0b21c4f14 (id: 3)
1-fffcacc4f9 (id: 2)
ea7d77b95e76 (id: 1)
1-fd4f5a8769 (id: 0)

Notice that the two hosts have identically named namespaces: 1-fffcacc4f9 corresponds to net1 and 1-fd4f5a8769 to net2 (the names are derived from the network IDs).

Containers on the same overlay network can communicate because each overlay network has its own network namespace, shared across the hosts; bbox1 can reach the containers on net2 because it joined both the net1 and net2 namespaces.
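To see this structure directly, you can run commands inside one of the overlay namespaces; each holds a Linux bridge with a vxlan device and the containers' veth peers attached (device names will vary per host). For example, for net1 from the listing above:

[root@server1 ~]# ip netns exec 1-fffcacc4f9 ip -d link show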