The cluster uses ONE shared content of ''/
**IMPACT**:
  * Storage configuration is one for the whole cluster. To create node-local storage, the storage should be limited to its own node only.
  * Joining a pure Debian-based node (BTRFS filesystem) to the cluster will result in "
  * Custom storages can be limited to specific nodes; to prevent creation of unusable storage, edit the storage and limit it to its own node.
  * Storage removed due to cluster joining can be added manually under another name and then restricted to a specific node.
  * [[vm:
**NOTE**: make a copy of ''/
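As an illustration of the node restriction described above (the storage and node names are placeholders, not taken from this page), an existing storage entry can be limited to one node from the shell:

<code bash>
# Placeholder names: storage "local-zfs", node "pve1".
# Restrict the storage so it is only used on pve1:
pvesm set local-zfs --nodes pve1
</code>

The same ''nodes'' restriction can also be set when creating the storage, or later via the WebUI storage editor.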
+ | |||
+ | ==== local-ZFS replication ==== | ||
+ | |||
It is possible to schedule replication of containers/
It gives data redundancy and reduces downtime when a container is moved between the local storage of each node.
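A minimal sketch using Proxmox's ''pvesr'' tool (the guest ID, job number, target node, and schedule below are placeholder assumptions, not values from this page):

<code bash>
# Replicate guest 100 to node "pve2" every 15 minutes (all values are placeholders):
pvesr create-local-job 100-0 pve2 --schedule "*/15"
# List the configured replication jobs:
pvesr list
</code>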
==== shared storage ====
The joining process will wait indefinitely until the iSCSI device is connected. During this time it is possible to disable CHAP authentication on the remote iSCSI target, or to provide common credentials for the Target Portal Group.
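For example (the target IQN and portal address are placeholders), the pending connection can be completed manually with open-iscsi from the joining node:

<code bash>
# Placeholders: IQN and portal IP of the remote iSCSI target.
iscsiadm -m node -T iqn.2020-04.example:target0 -p 192.168.1.10 --login
</code>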
===== Remove failed node =====

<code bash>
pvecm status
Highest expected: 4
</code>
<code bash>
pvecm delnode <nodename>
Killing node 2
</code>
<code bash>
pvecm status
Highest expected: 3
</code>
And reload the WebUI to refresh the cluster node list.
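Note that after ''pvecm delnode'' the removed node's files may still be present in the shared configuration; a cleanup sketch (the node name is a placeholder):

<code bash>
# Assumption: "node2" was the name of the removed node.
rm -rf /etc/pve/nodes/node2
</code>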
==== iSCSI issue with CHAP ====
When CHAP is used for iSCSI (manually configured from the Debian console), the joined node wants to connect to the same iSCSI target using its own initiator name and its own local configuration. Of course, the NAS326 is configured with only one CHAP login and password, so joining is not possible.
Is it possible to add a new ACL for the target on NAS326 using ''
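As an initiator-side alternative sketch (the target IQN and the CHAP credentials are placeholders): instead of adding a second ACL on the NAS326, the joining node can be pointed at the one CHAP account the target already accepts:

<code bash>
# Placeholders: target IQN, CHAP username and password.
T=iqn.2020-04.example:target0
iscsiadm -m node -T "$T" -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T "$T" -o update -n node.session.auth.username -v chapuser
iscsiadm -m node -T "$T" -o update -n node.session.auth.password -v chapsecret
</code>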