In default docker networking:
docker-proxy process
By default Docker tries to choose a non-conflicting addressing scheme:
172.17.0.0/16", 172.18.0.0/16", "172.19.0.0/16", "172.20.0.0/14", "172.24.0.0/14" "172.28.0.0/14", "192.168.0.0/16"
but sometimes it fails to avoid a conflict. The pools can then be set explicitly:
{ "default-address-pools": [ {"base":"172.17.0.0/16","size":24} ] }
This assigns a /16 pool to the Docker daemon, from which it carves out a /24 subnet for each created network.
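The configuration goes into /etc/docker/daemon.json (the default location on most Linux installs); after a daemon restart every newly created network gets its own /24 from the pool. A quick check, with an illustrative network name and result:

systemctl restart docker
docker network create test_net
docker network inspect test_net -f '{{(index .IPAM.Config 0).Subnet}}'
# e.g. 172.17.1.0/24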
Another example, where "bip" additionally pins the address of the default docker0 bridge itself:
{ "bip": "10.200.0.1/24", "default-address-pools":[ {"base":"10.201.0.0/16","size":24}, {"base":"10.202.0.0/16","size":24} ] }
The idea is to run multiple containers that serve different services on the same port but on different IPs, similar to using a bridged network with VirtualBox.
http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/ http://stackoverflow.com/questions/26539727/giving-a-docker-container-a-routable-ip-address
Simply add an additional IP to one of the host network interfaces, then map the container port to that host IP:port when starting the container. PROBLEM: if a host service already listens on all interfaces, the conflicting port cannot be used.
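A minimal sketch of this approach (eth0, the extra address and the nginx image are only examples):

ip addr add 192.168.0.240/24 dev eth0
docker run -d -p 192.168.0.240:80:80 nginx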
NOT TESTED.
Set the Docker internal bridge name to a real bridge on the host. NOTE: Docker will manipulate the host bridge (it assigns the configured address!) https://serverfault.com/questions/958367/how-do-i-give-a-docker-container-its-own-routable-ip-on-the-original-network
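A sketch of the corresponding daemon.json, assuming the host bridge is called br0 (mind the warning above, Docker will manage addresses on that bridge):

{
  "bridge": "br0"
}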
Reference: https://linux-blog.anracom.com/tag/linux-bridge-linking/
Create a virtual adapter (veth) pair:
ip link add dev veth_docker_lan type veth peer name veth_br-lan
Add each adapter to one of the bridges:
brctl addif docker_lan veth_docker_lan
ip link set veth_docker_lan up
brctl addif br-lan veth_br-lan
ip link set veth_br-lan up
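On hosts without bridge-utils, the same linking can be done with iproute2 alone (same interface names as above):

ip link set veth_docker_lan master docker_lan
ip link set veth_docker_lan up
ip link set veth_br-lan master br-lan
ip link set veth_br-lan up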
ISSUES: In theory it works, but there are problems with iptables and routing. Conntrack cannot see the packets (different network namespaces?), so all packets are treated as INVALID by the firewall.
Before MACVLAN, if you wanted to connect a VM or namespace to the physical network, you needed to create TAP/VETH devices, attach one side to a bridge, and attach a physical interface to the same bridge on the host. Now, with MACVLAN, you can bind a physical interface that is associated with a MACVLAN directly to namespaces, without the need for a bridge.
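A minimal sketch of the MACVLAN-into-namespace idea (eth0, the namespace name and the address are examples only):

ip netns add demo
ip link add mv0 link eth0 type macvlan mode bridge
ip link set mv0 netns demo
ip netns exec demo ip addr add 192.168.0.250/22 dev mv0
ip netns exec demo ip link set mv0 up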
NOTE: Both modes require hardware support for multiple MAC addresses on one interface. Without it the device needs to be switched into promiscuous mode, which is not easy. Working methods:
https://hicu.be/bridge-vs-macvlan
Macvlan modes:
Issue with bridge:
There can be only one macvlan network with a given subnet and gateway, so it is better to create the network manually:
docker network create --driver=macvlan \
    -o parent="br0" \
    -o mode="bridge" \
    --subnet="192.168.0.0/22" \
    --gateway="192.168.0.1" \
    real_lan
and then attach containers to the existing network:
version: '2'
services:
  myservice:
    networks:
      lan:
        ipv4_address: "192.168.0.241"
networks:
  lan:
    external:
      name: real_lan
or
docker network connect --ip="192.168.0.241" real_lan myservice
Linux Macvlan interface types are not able to ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host's eth0 it will not work. That traffic is explicitly filtered by the kernel to offer additional provider isolation and security. This is a common gotcha when a user first uses those Linux interface types since it is natural to ping local addresses when testing.
http://blog.oddbit.com/2018/03/12/using-docker-macvlan-networks/ - especially the comments, which describe how to force Docker to use an existing bridge on the host and avoid macvlan.
https://raid-zero.com/2017/08/02/exploring-docker-networking-host-none-and-macvlan/3/
As noted before, it is not possible to communicate with the real host IP address from a macvlan container.
(a possible alternative is a separate routed network using IPvlan L3 mode)
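A sketch of such an IPvlan L3 network (parent interface, subnet and name are examples; the LAN router must additionally get a route for that subnet pointing at the Docker host):

docker network create -d ipvlan \
    -o parent=eth0 \
    -o ipvlan_mode=l3 \
    --subnet=10.50.0.0/24 \
    ipvlan_l3_net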
Workaround often suggested: remove the host IP address from the network interface and assign it to a macvlan interface instead. In fact there is no need to touch the real host IP address; it is enough to add a macvlan interface and use address-less routing.
Scenario:
Create a macvlan0 interface with a dummy IP address and route traffic towards the containers through this interface:
auto macvlan0
iface macvlan0 inet static
    address 192.168.143.91/32
    pre-up ip link add macvlan0 link br0 type macvlan mode bridge
    post-down ip link del macvlan0 link br0 type macvlan mode bridge
    post-up ip r add 192.168.0.242 dev macvlan0 src <host_ip>
    post-up ip r add 192.168.0.241 dev macvlan0 src <host_ip>
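To check that the source-based routes took effect (output is illustrative):

ip route get 192.168.0.241
# expected: 192.168.0.241 dev macvlan0 src <host_ip>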
Problem: containers with default network settings (172.22.0.0) cannot communicate with 192.168.0.242. Docker creates default outgoing NAT rules for every container, and for an unknown reason the automatic MASQUERADE rule performs SNAT to the dummy macvlan0 IP 192.168.143.91, despite the routing rule which should force the source address to <host_ip>. The solution is to insert our own rules before Docker's rules:
auto macvlan0
iface macvlan0 inet static
    address 192.168.143.91/32
    pre-up ip link add macvlan0 link br0 type macvlan mode bridge
    post-down ip link del macvlan0 link br0 type macvlan mode bridge
    post-up ip r add 192.168.0.242 dev macvlan0 src <host_ip>
    post-up ip r add 192.168.0.241 dev macvlan0 src <host_ip>
    post-up iptables -t nat -I POSTROUTING -d 192.168.0.242 -j SNAT --to-source <host_ip>
    post-up iptables -t nat -I POSTROUTING -d 192.168.0.241 -j SNAT --to-source <host_ip>
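Because the rules are inserted with -I, they end up at the top of the nat POSTROUTING chain, before Docker's MASQUERADE rules; this can be verified with:

iptables -t nat -L POSTROUTING -n -v --line-numbers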
After the above fixes there is traffic from the container at 192.168.0.242 to the host at 192.168.0.231 and vice versa. OpenVPN traffic also works to any host in the 192.168.0.0 network, but not to the host 192.168.0.231. As a simple workaround I've enabled NAT on the OpenVPN container:
iptables -t nat -A POSTROUTING -s 10.1.1.0/24 -o eth0 -d 192.168.0.231 -j MASQUERADE