Cluster
Preparation
A second network interface on a separate internal IP network is recommended, both for redundancy and for shared-storage bandwidth.
Different nodes
The storage configuration is shared cluster-wide: for example, a “local-zfs” storage entry will be expected on every node. TBD: this lets VMs migrate between nodes that provide the same storage.
Joining a pure Debian-based node (with a BTRFS root filesystem) to the cluster results in the “local-zfs” storage appearing on the Debian/BTRFS node with a “?” sign.
To prevent the creation of unusable storage, a storage entry can be restricted to specific nodes: edit the storage and limit it to the nodes that actually provide it.
A storage entry removed during cluster joining can be re-added manually under another name and then restricted to a specific node.
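The node restriction can also be set directly in /etc/pve/storage.cfg; a minimal sketch, where the node name “pve1” and the pool path are assumptions:

```
# /etc/pve/storage.cfg (sketch; "pve1" and the pool path are assumptions)
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes pve1
```

The “nodes” line is what hides the entry on nodes that cannot serve it.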
Creation
Since Proxmox VE 6.0 it is possible to use the GUI to create and join a cluster.
Datacenter –> Cluster –> Create cluster
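The same can be done from the command line with pvecm; a sketch, where the cluster name “mycluster” is an assumption:

```shell
# Create a new cluster on the first node ("mycluster" is an assumption)
pvecm create mycluster

# Verify quorum and membership
pvecm status
```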
Joining
Joining other nodes
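Joining can also be done from the command line on the new node; a sketch, where the IP address of an existing cluster member is an assumption:

```shell
# Run on the node that should join, pointing at an existing cluster member
# (192.168.28.11 is an assumption)
pvecm add 192.168.28.11

# Verify that the node appears in the cluster
pvecm status
```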
Joining can get stuck when the newly added node has a problem with an iSCSI connection. Check the journal on the joining node:
  kernel: scsi host9: iSCSI Initiator over TCP/IP
  pvestatd[2097]: command '/usr/bin/iscsiadm --mode node --targetname iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux --login' failed: exit code 24
  iscsid[1060]: conn 0 login rejected: initiator failed authorization with target
  iscsid[1060]: Connection135:0 to [target: iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux, portal: 192.168.28.150,3260] through [iface: default] is shutdown.
The joining process waits indefinitely until the iSCSI device connects. During this time it is possible to disable CHAP authentication on the remote iSCSI target, or to provide common credentials for the Target Portal Group.
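If the target no longer enforces CHAP, the initiator side can be switched to no authentication by updating the stored node record with iscsiadm; a sketch, with the target IQN and portal taken from the log above:

```shell
# Switch the stored node record to no authentication (sketch)
iscsiadm --mode node \
  --targetname iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux \
  --portal 192.168.28.150:3260 \
  --op update -n node.session.auth.authmethod -v None

# Retry the login
iscsiadm --mode node \
  --targetname iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux \
  --portal 192.168.28.150:3260 --login
```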
iSCSI issue with CHAP
When CHAP is used for iSCSI (configured manually from the Debian console), the joined node tries to connect to the same iSCSI target using its own initiator name and its own local configuration. The NAS326, of course, is configured with only one CHAP login and password, so joining is not possible.
Is it possible to add a new ACL for the target on the NAS326 using the targetcli command, or to disable CHAP?
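If the NAS326 exposes targetcli, either option might look like the following sketch; the target IQN is taken from the logs above, while the initiator IQN is an assumption (it can be read from /etc/iscsi/initiatorname.iscsi on the joining node):

```shell
# Add an ACL for the joining node's initiator (initiator IQN is an assumption)
targetcli /iscsi/iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux/tpg1/acls \
  create iqn.1993-08.org.debian:01:joining-node

# Alternatively, disable authentication for the whole portal group
targetcli /iscsi/iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux/tpg1 \
  set attribute authentication=0

# Persist the configuration
targetcli saveconfig
```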
CEPH
Installation
- On one of the nodes:
- Datacenter –> Ceph –> Install Ceph-nautilus
- Configuration tab
- First Ceph monitor: set it to the current node. Note: other nodes cannot be selected yet, because Ceph is not installed on them.
- Repeat the installation on each node. The existing configuration will be detected automatically.
- On each node, add additional monitors:
- Select node –> Ceph –> Monitor
- Use the “Create” button in the Monitor section and select an available node.
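The steps above can also be performed with the pveceph CLI; a sketch, where the Ceph network 192.168.29.0/24 is an assumption:

```shell
# Install the Ceph packages on the node
pveceph install

# Initialize the Ceph configuration once, on the first node
# (the 192.168.29.0/24 network is an assumption)
pveceph init --network 192.168.29.0/24

# Create a monitor on the current node (repeat on each node)
pveceph mon create
```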
Create OSD
Create an Object Storage Daemon (OSD) for each physical disk that Ceph should use.
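A sketch of OSD creation from the CLI; the disk device /dev/sdb is an assumption, and the disk must be empty (no partitions or filesystem signatures):

```shell
# Create an OSD on an empty disk (/dev/sdb is an assumption)
pveceph osd create /dev/sdb

# Check the resulting OSD layout
ceph osd df tree
```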