vm:proxmox:cluster [2020/09/14 08:52] niziak
  
  
''Datacenter'' --> ''Cluster'' --> ''Create cluster''
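The same step can be done from the CLI with ''pvecm'', the Proxmox VE cluster manager. A minimal sketch; the cluster name below is a hypothetical example, and the command is echoed rather than executed because it must run as root on the first node:

```shell
# Hypothetical cluster name; pick your own.
CLUSTER_NAME="pve-cluster"

# Sketch only: 'pvecm create' is run as root on the first node.
CREATE_CMD="pvecm create ${CLUSTER_NAME}"
echo "${CREATE_CMD}"

# Afterwards, 'pvecm status' shows quorum and membership.
```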
  
  
</code>
The joining process will wait indefinitely until the iSCSI device is connected. During this time it is possible to disable CHAP authentication on the remote iSCSI target, or to provide common credentials for the Target Portal Group.
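While the join is stuck, the iSCSI login state can be inspected from the joining node. A sketch assuming open-iscsi's ''iscsiadm'' is installed; it prints a fallback message when no session is established:

```shell
# List active iSCSI sessions; print a fallback when there are none
# (or when iscsiadm is not installed at all).
SESSIONS="$(iscsiadm -m session 2>/dev/null || echo 'no active iSCSI sessions')"
echo "${SESSIONS}"
```

Once the session appears here (i.e. after CHAP has been disabled or common credentials supplied), the join should proceed.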
  
==== iSCSI issue with CHAP ====
  
  
====== CEPH ======

===== Prepare =====

Read the Proxmox Ceph requirements. Ceph requires at least one spare hard drive on each node. Topic for later.
===== Installation =====

  * On one of the nodes:
    * ''Datacenter'' --> ''Ceph'' --> Install Ceph-nautilus
    * Configuration tab
    * First Ceph monitor: set to the current node. NOTE: other nodes cannot be used yet, because Ceph is not installed on them.
  * Repeat the installation on each node. The configuration will be detected automatically.
  * On each node, add additional monitors:
    * Select node --> ''Ceph'' --> ''Monitor''
      * Use the "Create" button in the Monitor section and select the available nodes.
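The installation steps above can also be sketched with the ''pveceph'' CLI. The cluster network subnet below is a hypothetical example, and the commands are echoed rather than executed because they must run as root on real Proxmox nodes:

```shell
# Hypothetical Ceph cluster network; adjust to your setup.
CEPH_NET="10.0.0.0/24"

# Sketch of the CLI steps (echoed, not executed):
echo "pveceph install"                      # on every node
echo "pveceph init --network ${CEPH_NET}"   # once, on the first node
echo "pveceph mon create"                   # on each extra monitor node
```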
===== Create OSD =====

Create an Object Storage Daemon (OSD) on each node's spare disk.
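A sketch of creating an OSD with the ''pveceph'' CLI; the device name is a hypothetical example and the disk must be unused, so the command is echoed rather than executed:

```shell
# Hypothetical spare disk to turn into an OSD.
OSD_DISK="/dev/sdb"

# Sketch only: would run as root on the node that owns the disk.
OSD_CMD="pveceph osd create ${OSD_DISK}"
echo "${OSD_CMD}"
```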