====== Cluster ======
+ | |||
+ | ==== Different nodes configuration ==== | ||
+ | |||
+ | Cluster is using ONE shared content of ''/ | ||
+ | **IMPACT**: | ||
+ | * storage configuration is one for whole cluster. To create local node storage, storage should be limited only own node. | ||
+ | * Joining pure Debian based node (BTRFS FS) to cluster will result with " | ||
+ | * Custom storage config on nodes can disappear because only one shared config is used | ||
+ | * Storage removed due to cluster joining can be added manually under another name and then restricted to specific node. | ||
+ | * [[vm: | ||
+ | **NOTE**: make copy of ''/ | ||
+ | **NOTE**: create all custom mounts or ZFS pools before joining cluster. | ||
+ | |||

==== local-ZFS replication ====

It is possible to schedule replication of containers/…
It gives data redundancy and reduces downtime when a container is moved between the local storage of each node.

==== shared storage ====

  * It is recommended to use shared storage available from all nodes.
  * For file pool types (ISO images, backups) the easiest option is an NFS/CIFS share.

==== Live migration ====

  * It is only possible if the VM disk resides on shared network storage (available from all nodes).

===== Preparation =====

A second network interface with a separate internal IP network is recommended for redundancy and for shared storage bandwidth.

===== Creation =====

From Proxmox 6.0 it is possible to use the GUI to create and join a cluster:
''Datacenter --> Cluster --> Create cluster''

===== Joining =====

Joining other nodes.
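From the CLI, a node joins by pointing ''pvecm'' at an existing cluster member (the IP address is an assumption):

```shell
# Run on the NEW node; 10.10.10.1 is an existing cluster member (example IP):
pvecm add 10.10.10.1
```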
Joining can get stuck when there is a problem with the iSCSI connection from the newly added node.
Check the journal on the joining node:
<code>
kernel: scsi host9: iSCSI Initiator over TCP/IP
pvestatd[2097]: …
iscsid[1060]: …
iscsid[1060]: …
</code>
The joining process will wait indefinitely until the iSCSI device is connected. During this time it is possible to disable …
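Temporarily disabling the problematic storage entry can be sketched with ''pvesm'' (the storage ID is an assumption):

```shell
# Temporarily disable the unreachable iSCSI storage so the join can finish
# ("my-iscsi" is a placeholder storage ID):
pvesm set my-iscsi --disable 1

# Re-enable it once CHAP/connectivity is fixed:
pvesm set my-iscsi --disable 0
```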
==== iSCSI issue with CHAP ====

When CHAP is used for iSCSI (manually configured from the Debian console), the joined node wants to connect to the same iSCSI target using its own initiator name and its own local configuration. Of course the NAS326 is configured with only one CHAP login and password, so joining is not possible.
Is it possible to add a new ACL for the target on the NAS326 using ''…''?
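For reference, the manual initiator-side CHAP setup from the Debian console is done with ''iscsiadm''; a sketch where the target IQN, portal IP and credentials are all placeholders:

```shell
# Target IQN, portal IP and credentials below are placeholders:
TARGET=iqn.2000-01.com.example:target1
PORTAL=192.168.1.20

iscsiadm -m node -T "$TARGET" -p "$PORTAL" \
  -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T "$TARGET" -p "$PORTAL" \
  -o update -n node.session.auth.username -v chapuser
iscsiadm -m node -T "$TARGET" -p "$PORTAL" \
  -o update -n node.session.auth.password -v chapsecret
```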