      * [[vm:proxmox:zfs#create_local-zfs|create "local-zfs"]]
**NOTE**: make a copy of ''/etc/pve/storage.cfg'' from each node before joining it to the cluster.
**NOTE**: create all custom mounts or ZFS pools before joining the cluster.
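For example, the storage configuration can be backed up like this on each node before joining (a minimal sketch; the backup path under ''/root'' is arbitrary):
<code bash>
# Run on every node that will join the cluster.
# /etc/pve is overwritten by the cluster configuration when the node joins,
# so keep a per-node copy of the storage definitions.
cp /etc/pve/storage.cfg /root/storage.cfg.$(hostname).bak
</code>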
  
==== local-ZFS replication ====
  
  
''Datacenter'' --> ''Cluster'' --> ''Create cluster''
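The same can be done from the command line (a minimal sketch; the cluster name and node IP are placeholders):
<code bash>
# On the first node: create the cluster
pvecm create my-cluster

# On every other node: join using the first node's IP (placeholder)
pvecm add 192.168.1.10

# Check quorum and membership
pvecm status
</code>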
  
  
The joining process will wait indefinitely until the iSCSI device is connected. During this time it is possible to disable CHAP authentication on the remote iSCSI target, or to provide common credentials for the Target Portal Group.
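To see whether the joining node has actually connected to the iSCSI target while the join hangs, the open-iscsi tools can be used (a minimal sketch; the portal IP is a placeholder):
<code bash>
# List active iSCSI sessions; an empty result means the device is not connected yet
iscsiadm -m session

# Re-discover targets on the remote portal (placeholder IP)
iscsiadm -m discovery -t sendtargets -p 192.168.1.20
</code>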
  
==== iSCSI issue with CHAP ====
It is possible to add a new ACL for the target on the NAS326 using the ''targetcli'' command, or to disable CHAP.
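A rough sketch of both options with ''targetcli'' (the target and initiator IQNs are placeholders):
<code bash>
# Allow the Proxmox node's initiator by adding an ACL to the target's TPG (placeholder IQNs)
targetcli /iscsi/iqn.2020-01.com.example:target1/tpg1/acls create iqn.1993-08.org.debian:01:node1

# ...or disable CHAP authentication for the whole TPG
targetcli /iscsi/iqn.2020-01.com.example:target1/tpg1 set attribute authentication=0

# Persist the configuration
targetcli saveconfig
</code>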
  
====== CEPH ======

===== Prepare =====

Read the Proxmox Ceph requirements. Ceph requires at least one spare hard drive on each node. Topic for later.

===== Installation =====

  * On one of the nodes (a CLI sketch is shown after this list):
    * ''Datacenter'' --> ''Ceph'' --> ''Install Ceph-nautilus''
    * Configuration tab
    * First Ceph monitor: set it to the current node. NOTE: it is not possible to use the other nodes yet, because Ceph is not installed on them.
  * Repeat the installation on each node. The configuration will be detected automatically.
  * On each node, add additional monitors:
    * Select the node --> ''Ceph'' --> ''Monitor''
      * "Create" button in the Monitor section, and select the available nodes.

===== create OSD =====

Create an Object Storage Daemon (OSD) on every node in the cluster (a CLI sketch follows the list):
  * Select the host node
  * Go to menu ''Ceph'' --> ''OSD''
  * ''Create: OSD''
    * select the spare hard disk
    * leave the other defaults
    * press ''Create''
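A minimal CLI sketch, assuming the spare disk is ''/dev/sdb'' (placeholder device name):
<code bash>
# Wipe the spare disk if it held data before (destructive! placeholder device)
ceph-volume lvm zap /dev/sdb --destroy

# Create the OSD on the spare disk
pveceph osd create /dev/sdb
</code>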

===== create pool =====

Pool options (a CLI sketch is shown below):
  * ''Size'' - number of replicas for the pool
  * ''Min. Size'' - minimum number of replicas required for I/O
  * ''Crush Rule'' - only ''replicated_rule'' can be chosen
  * ''pg_num'' (Placement Groups) - use the [[https://ceph.io/pgcalc/|Ceph PGs per Pool Calculator]] to calculate ''pg_num''
    * NOTE: the PG count can be increased, but NEVER decreased without destroying / recreating the pool. However, increasing the PG count of a pool is one of the most impactful events in a Ceph cluster and should be avoided for production clusters if possible.
    * [[https://docs.ceph.com/docs/master/rados/operations/placement-groups/|Placement Groups]]
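A minimal CLI sketch, assuming a pool named ''rbd'' and values taken from the calculator (all numbers are placeholders; on older Proxmox versions the command is ''pveceph createpool''):
<code bash>
# Create a replicated pool with 3 replicas (2 required for I/O) and 128 placement groups
pveceph pool create rbd --size 3 --min_size 2 --pg_num 128
</code>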

==== pool benchmark ====
<code bash>
# 10-second write benchmark on pool 'rbd'; keep the written objects for later read tests
rados -p rbd bench 10 write --no-cleanup
</code>
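The objects left by the write run can then be reused for a read benchmark and finally removed (a minimal sketch on the same ''rbd'' pool):
<code bash>
# sequential read benchmark using the objects left by the write run
rados -p rbd bench 10 seq

# remove the benchmark objects
rados -p rbd cleanup
</code>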