
Enterprise Container Operations: Docker Networking

Contents

1. Docker native networks

2. Docker custom networks

3. Docker container communication

4. Cross-host container networking

1. Docker native networks

Docker's image system is what it is best known for, but networking remains one of its relatively weak areas.

After installation Docker automatically creates three networks: bridge, host and none.

They can be listed with the following command:

docker network ls

[root@node11 ~]# docker network ls

NETWORK ID     NAME      DRIVER    SCOPE

f257f4a6cb8f   bridge    bridge    local

95062a580d6e   host      host      local

b8ede4be7515   none      null      local

1) bridge: when Docker is installed it creates a Linux bridge named docker0, and newly created containers are automatically bridged to this interface.

  • In bridge mode a container has no public IP; only the host can access it directly and it is invisible to external hosts. The container can reach the outside world through the host's NAT rules.
[root@node11 harbor]# docker-compose start    start the containers
[root@node11 harbor]# docker-compose stop
[root@node11 harbor]# docker-compose down     stop and remove the containers
[+] Running 12/10
 ⠿ Container nginx                    Removed                                                        0.0s
 ⠿ Container chartmuseum              Removed                                                        0.0s
 ⠿ Container registryctl              Removed                                                        0.0s
 ⠿ Container harbor-jobservice        Removed                                                        0.0s
 ⠿ Container harbor-core              Removed                                                        0.0s
 ⠿ Container harbor-portal            Removed                                                        0.0s
 ⠿ Container redis                    Removed                                                        0.0s
 ⠿ Container harbor-db                Removed                                                        0.0s
 ⠿ Container registry                 Removed                                                        0.0s
 ⠿ Container harbor-log               Removed                                                       10.1s
 ⠿ Network harbor_harbor-chartmuseum  Removed                                                        0.1s
 ⠿ Network harbor_harbor              Removed                                                        0.0s
[root@node11 harbor]# docker-compose up -d    create and start the containers
[root@node11 ~]# docker run -d --name vm1 nginx    create a container
e46ee9a2c78b241e07f04afa47997e616b6da61ce7ea702d8b98f31eeb7af056
[root@node11 ~]# brctl show    the new container is bridged to docker0 by default
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242aee5f4c6       no              veth9cec61f
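To confirm that vm1 really received an address on the docker0 subnet, its IP can be read back with docker inspect (a quick hedged check; the exact address will vary):

docker inspect -f '{{.NetworkSettings.IPAddress}}' vm1    # e.g. 172.17.0.2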

2) host: the host network mode is selected at container creation time with --network=host.
No bridge is used; the container uses the same network stack as the host.
Host mode lets the container share the host's network stack; the benefit is that external hosts can communicate with the container directly, but the container's network loses isolation.

[root@node11 ~]# docker run -d --name vm2 --network host nginx
c54ec269fe548863c4087c4c8d42b764686056ffd3a096c1992819a3e3f02bdb
[root@node11 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242aee5f4c6       no              veth9cec61f
[root@node11 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:a9:33:1d brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.11/24 brd 192.168.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea9:331d/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ae:e5:f4:c6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aeff:fee5:f4c6/64 scope link
       valid_lft forever preferred_lft forever
109: veth9cec61f@if108: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 5a:e8:02:a7:ce:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::58e8:2ff:fea7:cef9/64 scope link
       valid_lft forever preferred_lft forever
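Because vm2 shares the host's network stack, the nginx inside it should answer directly on the host's port 80. A hedged quick check (assuming nothing else on the host already occupies port 80):

curl -I http://localhost          # expect nginx response headers
ss -tlnp | grep ':80 '            # the listener shows up in the host's own socket table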

3) none: networking is disabled and the container has only the lo interface; select it at creation time with --network=none.
This network is useful for workloads that should not be reachable by anything else.

[root@node11 ~]# docker run -it  --rm --network none busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
/ #

2. Docker custom networks

  • For custom networks, Docker provides three drivers:
    bridge
    overlay
    macvlan

1) The bridge driver behaves like the default bridge network mode but adds some new capabilities;
overlay and macvlan are used to build cross-host networks.
User-defined networks are recommended for controlling which containers can talk to each other; they also provide automatic DNS resolution of container names to IP addresses.

Creating a custom bridge
Containers on a user-defined network get name resolution and can ping each other by name; for comparison, the transcript below first shows two containers on the default bridge, which are reachable only by IP:

[root@node11 ~]# docker rm -f vm2
[root@node11 ~]# docker stop vm1
[root@node11 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@node11 ~]# docker run -d --name vm2 nginx
f272771bbd8a904e36869b06cd49c54e432a4692c66916b32c0229aea1454e9e
[root@node11 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS         PORTS     NAMES
f272771bbd8a   nginx     "/docker-entrypoint.…"   10 seconds ago   Up 9 seconds   80/tcp    vm2
[root@node11 ~]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.036 ms
^C
--- 172.17.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.036/0.052/0.084/0.023 ms
[root@node11 ~]# docker inspect vm1
Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "f257f4a6cb8fd8b0ca74a63c3a6ddfebe39dc0d535cb9296a848439de36d82c5",
                    "EndpointID": "9e98316f146d221ad65750840d9c179fec3c7f6ab0e1e0478126770009836305",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:03",
                    "DriverOpts": null
  • You can also define your own address range by passing --subnet and --gateway when creating the network.
  • The --ip option assigns a fixed IP address to a container, but only on a user-defined bridge; the default bridge mode does not support it. Containers on the same bridge can reach each other.
[root@node11 harbor]# docker rm -f vm1
vm1
[root@node11 harbor]# docker rm -f vm2
vm2
[root@node11 harbor]# docker network create --subnet 172.21.0.0/24 --gateway 172.21.0.1 mynet1
7810d9a35de25f6cb78346b087e27dfb7856756c6fdccb7911e90fa56d486246
[root@node11 harbor]# docker network inspect mynet1
IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/24",
                    "Gateway": "172.21.0.1"
                }
            ]
[root@node11 harbor]# docker run -d --name vm1 --network mynet1 nginx
83db0b0c0fa828e209a2ad24da3288cab824a7572791edd577886a7b28bd05a5
[root@node11 harbor]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS     NAMES
83db0b0c0fa8   nginx     "/docker-entrypoint.…"   5 seconds ago   Up 4 seconds   80/tcp    vm1
[root@node11 harbor]# docker inspect vm1
"NetworkID": "7810d9a35de25f6cb78346b087e27dfb7856756c6fdccb7911e90fa56d486246",
                    "EndpointID": "0e5321002fc016b54500add51da0f50d60f714a6701607ea7611c1ad774a7eb4",
                    "Gateway": "172.21.0.1",
                    "IPAddress": "172.21.0.2",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:15:00:02",
                    "DriverOpts": null
With the subnet defined as above, a fixed IP can also be assigned when a container is started (see the sketch below);
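A minimal sketch (the container name web and the address 172.21.0.100 are illustrative only, not part of the transcript above):

docker run -d --name web --network mynet1 --ip 172.21.0.100 nginx
docker inspect -f '{{.NetworkSettings.Networks.mynet1.IPAddress}}' web    # should print 172.21.0.100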
[root@node11 harbor]# docker run -it --name vm2 --network mynet1 busybox
/ # ping vm1
PING vm1 (172.21.0.2): 56 data bytes
64 bytes from 172.21.0.2: seq=0 ttl=64 time=0.140 ms
64 bytes from 172.21.0.2: seq=1 ttl=64 time=0.059 ms
64 bytes from 172.21.0.2: seq=2 ttl=64 time=0.058 ms
^C
--- vm1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.058/0.085/0.140 ms
/ # ping vm2
PING vm2 (172.21.0.3): 56 data bytes
64 bytes from 172.21.0.3: seq=0 ttl=64 time=0.075 ms
64 bytes from 172.21.0.3: seq=1 ttl=64 time=0.045 ms
^C
--- vm2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.045/0.060/0.075 ms
/ #
[root@node11 harbor]# docker stop vm1
vm1
[root@node11 harbor]# docker run -d --name vm3 --network mynet1 nginx
22ac6958b576dca30be739a277620697426947d0aa6ee5453552a3c13e97f563
[root@node11 harbor]# docker inspect vm3    vm3 has taken over vm1's old IP; addresses are assigned dynamically
"NetworkID": "7810d9a35de25f6cb78346b087e27dfb7856756c6fdccb7911e90fa56d486246",
                    "EndpointID": "535ba7f35f90df6aea8a3439fb16effc585ef2dbaabfd270a1f68ad0ec227f03",
                    "Gateway": "172.21.0.1",
                    "IPAddress": "172.21.0.2",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:15:00:02",
                    "DriverOpts": null
[root@node11 harbor]# docker start vm1
vm1
[root@node11 harbor]# docker inspect vm1
 "NetworkID": "7810d9a35de25f6cb78346b087e27dfb7856756c6fdccb7911e90fa56d486246",
                    "EndpointID": "6e3e48d977000ada72c79b517a629c44e811376ac504264875801ff6e32d3c84",
                    "Gateway": "172.21.0.1",
                    "IPAddress": "172.21.0.3",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:15:00:03",
                    "DriverOpts": null
[root@node11 harbor]# docker attach vm2 
/ # ping vm1    the name now resolves to the new address automatically
PING vm1 (172.21.0.3): 56 data bytes
64 bytes from 172.21.0.3: seq=0 ttl=64 time=0.102 ms
64 bytes from 172.21.0.3: seq=1 ttl=64 time=0.059 ms
^C
--- vm1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.059/0.080/0.102 ms
/ #
Containers attached to different bridges cannot communicate with each other.
Docker is designed to isolate different networks from one another.
[root@node11 harbor]# docker network create --subnet 172.22.0.0/24 --gateway 172.22.0.1 mynet2
2185535f7b81f2125dc2366b33aa64287faeffa8bacef98f184ace8f4223dc09
[root@node11 harbor]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
f257f4a6cb8f   bridge    bridge    local
95062a580d6e   host      host      local
7810d9a35de2   mynet1    bridge    local
2185535f7b81   mynet2    bridge    local
b8ede4be7515   none      null      local
[root@node11 harbor]# docker run -it --name vm2 --network mynet2 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
133: eth0@if134: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:16:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.2/24 brd 172.22.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping vm1
ping: bad address 'vm1'
/ #
To let containers on two different bridges communicate:
use the docker network connect command to give the container a second NIC; here vm2 (on mynet2) is attached to mynet1.
[root@node11 harbor]# docker network connect mynet1 vm2
[root@node11 harbor]# docker attach vm2    one NIC on the 172.21.0.0/24 network and one on 172.22.0.0/24
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
135: eth1@if136: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:16:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.2/24 brd 172.22.0.255 scope global eth1
       valid_lft forever preferred_lft forever
137: eth0@if138: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:15:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.2/24 brd 172.21.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping vm1    now vm1 is reachable
PING vm1 (172.21.0.3): 56 data bytes
64 bytes from 172.21.0.3: seq=0 ttl=64 time=0.153 ms
64 bytes from 172.21.0.3: seq=1 ttl=64 time=0.075 ms
^C
--- vm1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.075/0.114/0.153 ms

3. Docker container communication

  • Besides IP addresses, containers can also communicate with each other by container name.
    Since Docker 1.10 an embedded DNS server is built in.
    This name resolution only works on user-defined networks.
    Use the --name option when starting a container to give it a name.
    A third option is the container network mode (--network container:<name>), where a new container joins an existing container's network stack, as the transcript below shows.
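The difference is easy to see with a throwaway busybox (a hedged sketch; it assumes vm1 is still running on mynet1 as above):

docker run --rm busybox nslookup vm1                    # default bridge: the name does not resolve
docker run --rm --network mynet1 busybox nslookup vm1   # user-defined network: resolved by the embedded DNS at 127.0.0.11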
[root@node11 harbor]# docker rm vm2
vm2
[root@node11 harbor]# docker run -it --rm --network container:vm1 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
128: eth0@if129: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:15:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.3/24 brd 172.21.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #
  • Containers in this container network mode share a single network stack, so the two containers can communicate with each other over localhost quickly and efficiently (see the quick check after this list).
  • --link can be used to link two containers.
    --link format: --link <name or id>:alias
    name and id identify the source container; alias is the name by which the source container is known inside the linking container.
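A minimal check of the shared stack (hedged; it assumes vm1 is still the running nginx container): joining vm1's network namespace and fetching localhost should return the nginx welcome page.

docker run --rm --network container:vm1 busybox wget -qO- http://localhost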
[root@node11 harbor]# docker rm -f vm1
vm1
[root@node11 harbor]# docker run -d --name vm1 nginx
fd38715eecc0e4e8f0180d92a5f6b12e8cb3a1b49f3a36854ce592bb47d5827a
[root@node11 harbor]# docker run -it --rm --link vm1:webserver busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
145: eth0@if146: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # env
HOSTNAME=e667cd54b040
SHLVL=1
HOME=/root
WEBSERVER_PORT=tcp://172.17.0.2:80
WEBSERVER_NAME=/condescending_bartik/webserver
WEBSERVER_PORT_80_TCP_ADDR=172.17.0.2
TERM=xterm
WEBSERVER_PORT_80_TCP_PORT=80
WEBSERVER_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
WEBSERVER_PORT_80_TCP=tcp://172.17.0.2:80
WEBSERVER_ENV_PKG_RELEASE=1~bullseye
WEBSERVER_ENV_NGINX_VERSION=1.21.5
PWD=/
WEBSERVER_ENV_NJS_VERSION=0.7.1
/ # ping vm1
PING vm1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.117 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.074 ms
^C
--- vm1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.074/0.095/0.117 ms
/ # cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.2	web 64c84d7f690c nginx
172.17.0.3	c87cfc906b0e
/ # 

When the linked container changes, only the name resolution (/etc/hosts entry) is updated; the environment variables are not refreshed.

  • Containers reach the external network through iptables SNAT on the host.
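The masquerading rule can be inspected on the host (a hedged sketch; the exact rule text varies slightly between Docker versions):

iptables -t nat -S POSTROUTING
# a default install typically contains a rule like:
# -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE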

 

  • How external hosts reach a container:
    port mapping; the -p option specifies the published port.
[root@node11 harbor]# docker rm -f vm1
vm1
[root@node11 harbor]# docker rm -f vm2
vm2
[root@node11 harbor]# docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@node11 harbor]# docker run -d --name vm1 -p 80:80 nginx
034a076fcf79febf9cac1bbe5e325dada7e26cd10af87938b5824b522e5991e1
[root@node11 harbor]# docker port vm1
80/tcp -> 0.0.0.0:80
80/tcp -> :::80
  • Inbound access to containers relies on docker-proxy and iptables DNAT:
    the host accessing a local container goes through iptables DNAT;
    access from external hosts, and container-to-container access through the published port, is handled by docker-proxy.
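Both mechanisms can be observed on the host (hedged; the 172.17.0.2 in the comment is simply the container address from the example above):

ps ax | grep [d]ocker-proxy    # one docker-proxy process per published port
iptables -t nat -S DOCKER      # typically includes a DNAT rule such as:
# -A DOCKER ! -i docker0 -p tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80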

 

4. Cross-host container networking

  • Cross-host networking solutions:
    Docker-native: overlay and macvlan
    Third-party: flannel, weave, calico
  • How these many network solutions integrate with Docker:
    libnetwork, the Docker container networking library
    CNM (Container Network Model), the abstraction it uses for container networking
  • CNM consists of three kinds of components:
    Sandbox: the container's network stack, including its interfaces, DNS and routing table (implemented as a network namespace)
    Endpoint: attaches a sandbox to a network (implemented as a veth pair)
    Network: a group of endpoints; endpoints in the same network can communicate with each other.
  • The macvlan approach:
    a NIC virtualization technique provided by the Linux kernel.
    It needs no Linux bridge and attaches containers directly to the physical interface, so performance is excellent.
[root@node11 ~]# docker network rm mynet1
[root@node11 ~]# docker network rm mynet2
[root@node11 ~]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
f257f4a6cb8f   bridge    bridge    local
95062a580d6e   host      host      local
b8ede4be7515   none      null      local
[root@node11 ~]# docker rm -f vm1
For simplicity no extra NIC is added; the existing ens33 interface is used.
[root@node11 ~]# ip link set ens33 promisc on    enable promiscuous mode on ens33

On host node22:
[root@node22 docker]# ip link set ens33 promisc on
[root@node22 docker]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6e:b8:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.22/24 brd 192.168.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6e:b8d0/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ad:dd:64:11 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Create a macvlan network on each of the two Docker hosts;
run a container on the second host as well, to demonstrate communication across hosts.
Container interfaces connect directly to the host NIC, with no need for NAT or port mapping.

[root@node11 ~]# docker network create -d macvlan --subnet 172.20.0.0/24 --gateway 172.20.0.1 -o parent=ens33 mynet1
88413787b9500a31976e9fcd9d9d5ab2a5fc619510a21fa763582fc732487b64
[root@node11 ~]#  docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
f257f4a6cb8f   bridge    bridge    local
95062a580d6e   host      host      local
88413787b950   mynet1    macvlan   local
b8ede4be7515   none      null      local
[root@node11 ~]# docker network inspect mynet1
  },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "parent": "ens33"  定义接口
        },
        "Labels": {}
    }
]
On host node22:
[root@node22 docker]# docker network create -d macvlan --subnet 172.20.0.0/24 --gateway 172.20.0.1 -o parent=ens33 mynet1
6b2e0c1ce2bb81a6ea4fdd078bfde89643276fa9e1e156887b3c71fa596d4544
[root@node22 docker]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
da485c7bca9a   bridge    bridge    local
06bc5f10abfe   host      host      local
6b2e0c1ce2bb   mynet1    macvlan   local
2e3bcc3a8e54   none      null      local

[root@node11 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242aee5f4c6       no
[root@node11 ~]# docker run -it --rm --network mynet1 --ip 172.20.0.10 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
153: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:14:00:0a brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.10/24 brd 172.20.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #
[root@node22 docker]#  docker run -it --rm --network mynet1 --ip 172.20.0.11 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:14:00:0b brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.11/24 brd 172.20.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 172.20.0.10
PING 172.20.0.10 (172.20.0.10): 56 data bytes
64 bytes from 172.20.0.10: seq=0 ttl=64 time=0.538 ms
64 bytes from 172.20.0.10: seq=1 ttl=64 time=0.427 ms
^C
--- 172.20.0.10 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.427/0.482/0.538 ms
/ #

macvlan takes exclusive use of a host NIC, but multiple macvlan networks can be created by using VLAN sub-interfaces. VLANs divide a physical layer-2 network into up to 4094 isolated logical networks; VLAN IDs range from 1 to 4094.

[root@node11 ~]# docker network create -d macvlan --subnet 172.21.0.0/24 --gateway 172.21.0.1 -o parent=ens33.1 mynet2
e9325446401f393ad3732018783c7d7943269a80961c5e997249388729c736dc
[root@node11 ~]# docker network inspect mynet2
},
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "parent": "ens33.1"
        },
        "Labels": {}
    }
]
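When a dotted parent such as ens33.1 is specified, Docker creates the 802.1Q sub-interface on the host automatically; a quick hedged check:

ip -d link show ens33.1    # the detailed output shows vlan ... id 1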
[root@node22 docker]#  docker network create -d macvlan --subnet 172.21.0.0/24 --gateway 172.21.0.1 -o parent=ens33.1 mynet2
3bedcea860fffeb9d859f41cf7d189b951597ca92043692e83f1c1eb5b8f2903
[root@node11 ~]# docker run -it --rm --network mynet2 --ip 172.21.0.10 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
155: eth0@if154: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:15:00:0a brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.10/24 brd 172.21.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #
[root@node22 docker]# docker run -it --rm --network mynet2 --ip 172.21.0.11 busybox
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:15:00:0b brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.11/24 brd 172.21.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 172.21.0.10
PING 172.21.0.10 (172.21.0.10): 56 data bytes
64 bytes from 172.21.0.10: seq=0 ttl=64 time=0.608 ms
64 bytes from 172.21.0.10: seq=1 ttl=64 time=0.345 ms
^C
--- 172.21.0.10 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.345/0.476/0.608 ms
/ #

 

Isolation and connectivity between macvlan networks:
macvlan networks are isolated at layer 2, so containers on different macvlan networks cannot communicate directly; they can, however, be connected at layer 3 through a gateway.
Docker itself imposes no restrictions here; manage them just as you would a traditional VLAN network.
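One possible layer-3 sketch (entirely hypothetical; it assumes a separate Linux box with NIC eth0 sits on the same physical network and acts as the router): give that box the two gateway addresses and enable forwarding, and containers in mynet1 and mynet2 can then reach each other through it.

sysctl -w net.ipv4.ip_forward=1                      # let the box route between the two subnets
ip addr add 172.20.0.1/24 dev eth0                   # gateway of mynet1 (untagged traffic)
ip link add link eth0 name eth0.1 type vlan id 1     # VLAN 1 sub-interface for mynet2's tagged traffic
ip link set eth0.1 up
ip addr add 172.21.0.1/24 dev eth0.1                 # gateway of mynet2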

docker network subcommands
connect      attach a container to a network
create       create a network
disconnect   detach a container from a network
inspect      show detailed information about a network
ls           list all networks
rm           remove a network

 
