vm:proxmox:cluster — revision 2020/09/16 05:52 (current) by niziak
====== Cluster ======

==== Different nodes configuration ====

The cluster uses ONE shared content of ''/

**IMPACT**:
  * The storage configuration is common for the whole cluster. To create local node storage, the storage entry should be restricted to its own node only.
  * Joining a pure Debian-based node (BTRFS filesystem) to the cluster will result in "
  * Custom storage configuration on a node can disappear, because only the one shared configuration is used.
  * Storage removed during cluster joining can be added back manually under another name and then restricted to a specific node.
  * [[vm:
**NOTE**: make a copy of ''/
**NOTE**: create all custom mounts or ZFS pools before joining the cluster.
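
A sketch of the node-restriction idea above: in the shared storage configuration, each entry can carry a ''nodes'' list, so a local pool is only activated on the node that actually has it. The names below (''local-zfs-node1'', ''rpool/data'', ''node1'') are placeholders, not from this page:

```
# Shared cluster-wide storage configuration (one copy for all nodes).
# A local ZFS pool entry, restricted to a single node via 'nodes':
zfspool: local-zfs-node1
        pool rpool/data
        content images,rootdir
        nodes node1
```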

==== local-ZFS replication ====

It is possible to schedule replication of containers/
This gives data redundancy and reduces downtime when a container is moved between the local storage of each node.
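
The replication schedule can also be managed from the CLI with ''pvesr''. A sketch, assuming guest ID ''100'' and a target node called ''node2'' (both placeholders):

```
# Replicate guest 100 to node2 every 15 minutes (job id = <guestid>-<jobnum>)
pvesr create-local-job 100-0 node2 --schedule "*/15"

# List configured replication jobs and check their status
pvesr list
pvesr status
```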

==== shared storage ====
  * It is recommended to use shared storage available from all nodes.
  * For file pool types (ISO images, backups) the easiest option is an NFS/CIFS share.
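
An NFS share for ISO images and backups can be added cluster-wide with ''pvesm''; the storage name, server address, and export path below are placeholders:

```
# Add an NFS storage visible to all nodes, for ISO images and backups
pvesm add nfs nfs-store --server 192.168.10.1 --export /srv/nfs/pve --content iso,backup

# Verify the storage is active
pvesm status
```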

==== Live migration ====
  * Live migration is only possible if the VM disk resides on shared network storage (available from all nodes).
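
From the CLI, a live (online) migration can be triggered with ''qm''; the VM ID and target node name are placeholders:

```
# Live-migrate running VM 100 to node2 (its disk must be on shared storage)
qm migrate 100 node2 --online
```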

===== Preparation =====
A second network interface on a separate internal IP network is recommended, for redundancy and for shared storage bandwidth.
===== Creation =====
''Datacenter --> Cluster --> Create cluster''
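
The same steps are available from the CLI via ''pvecm''; the cluster name and IP address below are placeholders:

```
# On the first node: create the cluster
pvecm create mycluster

# On each additional node: join using the IP of an existing cluster member
pvecm add 192.168.10.11

# Check quorum and membership
pvecm status
```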
</code>
The joining process will wait indefinitely until the iSCSI device is connected. During this time it is possible to disable CHAP authentication on the remote iSCSI target, or to provide common credentials for the Target Portal Group.
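
While the join hangs, the iSCSI state can be inspected on the joining node with open-iscsi's ''iscsiadm''; the target IQN and portal address below are placeholders:

```
# Show currently established iSCSI sessions (empty output = nothing connected yet)
iscsiadm -m session

# Retry login to the target once CHAP has been disabled / credentials fixed
iscsiadm -m node -T iqn.2020-01.example:target1 -p 192.168.10.5 --login
```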

==== iSCSI issue with CHAP ====
====== CEPH ======

===== Installation =====

  * On one of the nodes:
    * Datacenter --> Ceph --> Install Ceph-nautilus
    * Configuration tab:
      * First Ceph monitor - set it to the current node. NOTE: it is not yet possible to use the other nodes, because Ceph is not installed on them.
  * Repeat the installation on each node. The configuration will be detected automatically.
  * On each node, add additional monitors:
    * Select node --> Ceph --> Monitor
    * "

===== create OSD =====

Create an Object Storage Daemon (OSD) on each node's unused disk(s).
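
A sketch of OSD creation from the CLI, assuming an empty disk ''/dev/sdb'' (placeholder) on the node:

```
# Create an OSD on an unused disk (this wipes the disk!)
pveceph osd create /dev/sdb

# Show OSD usage and placement across the cluster
ceph osd df tree
```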