====== Network planning ======

  * [[https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/|Ceph Network Configuration Reference]]
  * [[https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/configuration_guide/ceph-network-configuration|Red Hat: Ceph network configuration]]
  * [[https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.2.3/html/ceph_configuration_guide/network-configuration-reference]]
  * [[https://ceph-users.ceph.narkive.com/wTDiWx2w/have-2-different-public-networks]]

The default IP for each node is defined in ''/etc/pve/.members'' (an illustrative example is at the end of this page).

====== Preferred network layout ======

  * WAN interface (to give VMs access to the Internet)
  * 10 GbE for the CEPH private network
  * 10 GbE for the Proxmox migration network (can share the CEPH private network)
  * 100 Mb/s or 1 Gb/s link for Corosync only

Do not try to use classic Linux bridge STP! It converges too slowly, and if Corosync is on the same network the outage will cause node isolation and reboots.

My simple and preferred way is to use a Linux bond that joins a 10 GbE and a 1 GbE interface in ''active-backup'' mode (a sketch is shown at the end of this page).

===== corosync =====

Corosync carries the Proxmox cluster data. This network must not be overloaded with storage or VM traffic, because a congested link can make the cluster isolate a node and reboot it together with its VMs.

===== CEPH =====

  * private network / cluster network:
    * heartbeat
    * object replication
    * recovery traffic (in general, inter-OSD traffic)
  * public network: MONs, MDSs, CEPH clients

<code>
[global]
cluster_network = 192.168.0.232/22
public_network = 192.168.0.232/22

[mon.pve1]
public_addr = 192.168.0.232

[mon.pve2]
public_addr = 192.168.0.233
</code>
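The networks above typically ride on the bond described earlier. A minimal sketch of such an ''active-backup'' bond in ''/etc/network/interfaces'', assuming the 10 GbE port is ''enp65s0f0'' and the spare 1 GbE port is ''eno2'' (interface names and the address are placeholders, adjust to your hardware; the address matches the ''cluster_network''/''public_network'' example above):

<code>
# slave interfaces carry no IP of their own
iface enp65s0f0 inet manual
iface eno2 inet manual

# CEPH cluster/public + migration network: 10 GbE primary, 1 GbE backup
auto bond0
iface bond0 inet static
        address 192.168.0.232/22
        bond-slaves enp65s0f0 eno2
        bond-mode active-backup
        bond-primary enp65s0f0
        bond-miimon 100
</code>

With ifupdown2 (the Proxmox default) the change can be applied with ''ifreload -a''.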
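For the Corosync-only network, an illustrative ''/etc/pve/corosync.conf'' with a single dedicated link; the 10.10.10.0/24 subnet and cluster name are assumptions, only the node names and count follow the examples on this page:

<code>
totem {
  cluster_name: cluster1
  config_version: 3
  interface {
    linknumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.232
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.233
  }
}

quorum {
  provider: corosync_votequorum
}
</code>

Remember to edit this file the documented way: work on a copy, bump ''config_version'', then move it back into ''/etc/pve''.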
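For reference, ''/etc/pve/.members'' (mentioned at the top of this page) looks roughly like this on a two-node cluster. The file is generated by pmxcfs, so treat it as read-only; exact fields can differ between versions and the cluster name here is just a placeholder:

<code>
{
"nodename": "pve1",
"version": 4,
"cluster": { "name": "cluster1", "version": 2, "nodes": 2, "quorate": 1 },
"nodelist": {
  "pve1": { "id": 1, "online": 1, "ip": "192.168.0.232"},
  "pve2": { "id": 2, "online": 1, "ip": "192.168.0.233"}
  }
}
</code>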