vm:proxmox:cluster (last revision 2020/09/16 07:52 by niziak)
  * [[vm:
**NOTE**: make a copy of ''/
**NOTE**: create all custom mounts or ZFS pools before joining the cluster.
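Before joining, it is worth snapshotting the node's configuration directory. A minimal sketch, assuming the directory from the note above is ''/etc/pve'' (the original path is cut off, so adjust as needed):

```shell
# Sketch: snapshot a config directory before joining the cluster.
# /etc/pve in the usage example is an assumption -- use the directory
# mentioned in the note above.
backup_dir() {   # backup_dir <source-dir> <output-tarball>
    tar czf "$2" -C "$(dirname "$1")" "$(basename "$1")"
}

# Example (run as root on the node about to join):
# backup_dir /etc/pve /root/pve-config-backup.tar.gz
```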
==== local-ZFS replication ====
Datacenter --> Cluster --> Create cluster
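The same can be done from the CLI with ''pvecm''; the cluster name and IP address below are placeholders:

```shell
# On the first node: create the cluster (name is a placeholder)
pvecm create my-cluster

# On each additional node: join using the first node's IP (placeholder)
pvecm add 192.168.1.10

# Verify quorum and membership
pvecm status
```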
</code>
The joining process will wait indefinitely until the iSCSI device is connected. During this time it is possible to disable CHAP authentication on the remote iSCSI target, or to provide common credentials for the Target Portal Group.
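While the join hangs, the initiator side can be inspected with ''iscsiadm''. A sketch with placeholder IQN and portal values:

```shell
# List iSCSI sessions and node records known to this initiator
iscsiadm -m session
iscsiadm -m node

# Alternative to disabling CHAP on the target: clear CHAP on the
# initiator's record for a given target (IQN and portal are placeholders)
iscsiadm -m node -T iqn.2020-01.example:target0 -p 192.168.1.20 \
    -o update -n node.session.auth.authmethod -v None
```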
==== iSCSI issue with CHAP ====
It is possible to add a new ACL for the target on NAS326 using ''
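The exact tool is cut off above; assuming the NAS exposes a LIO ''targetcli'' shell (an assumption, not confirmed by the source), an initiator ACL could be added roughly like this, with placeholder IQNs:

```shell
# Placeholders: target IQN, TPG number, and initiator IQN
targetcli /iscsi/iqn.2020-01.example:target0/tpg1/acls \
    create iqn.1993-08.org.debian:01:proxmox-node1

# Persist the configuration
targetcli saveconfig
```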
- | |||
- | |||
====== CEPH ======

===== Prepare =====

Read the Proxmox Ceph requirements. Ceph requires at least one spare hard drive on each node. Topic for later.

===== Installation =====

  * On one of the nodes:
    * Datacenter --> Ceph --> Install Ceph-nautilus
    * Configuration tab
      * First Ceph monitor - set to the current node. NOTE: it is not possible to use other nodes yet, because Ceph is not installed on them.
  * Repeat the installation on each node. The configuration will be detected automatically.
  * On each node - add additional monitors:
    * Select node --> Ceph --> Monitor
    * "
- | |||
===== create OSD =====

Create an Object Storage Daemon (OSD) on every node in the cluster:
  * Select the host node
  * Go to menu ''
  * ''
  * select a spare hard disk
  * leave the other defaults
  * press ''
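The CLI equivalent of the OSD creation steps, with a placeholder device path:

```shell
# Create an OSD on a spare disk (device path is a placeholder)
pveceph osd create /dev/sdb

# Check the resulting OSD layout
ceph osd tree
```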
- | |||
===== create pool =====

  * ''
  * ''
  * ''
  * ''
  * NOTE: the PG count of a pool can be increased, but it can NEVER be decreased without destroying and recreating the pool. Increasing the PG count is also one of the most impactful events in a Ceph cluster, and should be avoided on production clusters if possible.
  * [[https://
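A pool can also be created from the CLI; the pool name and PG count below are placeholders:

```shell
# Create a pool via Proxmox tooling (name and pg_num are placeholders)
pveceph pool create mypool --pg_num 128

# Or with plain Ceph tooling
ceph osd pool create mypool 128
```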
- | |||
- | |||
==== pool benchmark ====

Benchmarks for the pool named ''rbd'':
<code bash>
# Write benchmark
rados -p rbd bench 10 write --no-cleanup

# Read benchmark (sequential)
rados -p rbd bench 10 seq
</code>