====== CEPH ======

  * OSD - (Object Storage Device) - storage on a physical device or logical unit (LUN). Typically, data on an OSD is configured as a btrfs file system to take advantage of its snapshot features, but other file systems such as XFS can also be used.
  * MON - (Monitor) - a Ceph component used for tracking active and failed nodes in a storage cluster. The Ceph Monitor (MON) [5] maintains a master copy of the cluster map. For high availability you need at least 3 monitors. One monitor will already be installed if you used the installation wizard. You won't need more than 3 monitors as long as your cluster is small to medium-sized; only really large clusters will require more than that.
  * MGR - (Manager) - the Ceph manager software, which collects all the state from the whole cluster in one place.
  * MDS - (Meta Data Server) - serves filesystem metadata for CephFS clients.
  * RBD - (RADOS Block Device) - a Ceph component that provides access to Ceph storage as a thinly provisioned block device. When an application writes to a block device, Ceph implements data redundancy and enhances I/O performance by replicating and striping data across the storage cluster.
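
A quick way to see these components in a running cluster (a minimal sketch; assumes a working ''ceph'' CLI on a cluster node):

<code bash>
# overall health plus MON, MGR, OSD and MDS status in one summary
ceph -s

# OSDs with their hosts, weights and up/down state
ceph osd tree
</code>
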
===== Ceph RADOS Block Devices (RBD) =====

CEPH provides only pools of objects. To use them as block devices for VMs, an additional layer (RBD) is needed.
It can be created manually or during CEPH pool creation (option ''Add as Storage'').

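A sketch of adding the RBD layer manually on a Proxmox node (the storage ID ''ceph-vm'' and pool name ''vmpool'' are made-up examples, not names from this page):

<code bash>
# register an existing CEPH pool as RBD storage for VM and container disks
pvesm add rbd ceph-vm --pool vmpool --content images,rootdir
</code>
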
===== Ceph FS =====
It is an implementation of a POSIX-compliant FS on top of a CEPH pool.
It requires one pool for data (block data) and one to keep filesystem information (metadata).
Performance strictly depends on the metadata pool, so it is recommended mainly for backup files.

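A sketch of creating a CephFS on Proxmox (assumes a metadata server is created first; ''cephfs'' is the default name):

<code bash>
# create a metadata server on this node (required before the filesystem)
pveceph mds create

# create the data and metadata pools, the filesystem, and register it as storage
pveceph fs create --name cephfs --add-storage
</code>
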
Used ports:
  * TCP 6789 - monitors
  * TCP 6800-7300 - OSDs:
    - one for talking to clients and monitors,
    - one for sending data to other OSDs (replication, backfill and recovery),
    - one for heartbeating.
  * TCP 7480 - CEPH Object Gateway
  * The MDS also uses a port above 6800.
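
To verify which of these ports the CEPH daemons are actually listening on (plain Linux tooling, nothing Proxmox-specific):

<code bash>
# list listening TCP sockets owned by ceph processes
ss -tlnp | grep ceph
</code>
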
===== Prepare =====
</code>
===== restart ceph services =====
<code bash>
# stop all ceph daemons and targets on this node
# (the backslash passes the glob to systemd instead of the shell)
systemctl stop ceph\*.service ceph\*.target
# start everything again via the umbrella target
systemctl start ceph.target
</code>
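
Individual daemons can also be restarted instead of the whole target; a sketch (instance names depend on the node: MON instances are usually named after the host, OSD instances use numeric IDs):

<code bash>
# restart a single monitor and a single OSD (example instance names)
systemctl restart ceph-mon@pve1.service
systemctl restart ceph-osd@0.service
</code>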
===== create pool =====
</code>
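
To confirm the pool exists and check its settings afterwards (a simple verification step):

<code bash>
# list pools with replication size, pg_num and flags
ceph osd pool ls detail
</code>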