====== Storage ======

===== Terms =====
  * **shared**
    * do not mark local storage as shared: its content differs on each node
    * a major benefit of keeping VM disks on shared storage is the ability to live-migrate running machines without any downtime, since all nodes in the cluster have direct access to the VM disk images; no image data needs to be copied, so live migration is very fast in that case
  * **thin-provisioning** - blocks are allocated only when they are actually written
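The effect of thin provisioning can be sketched with an ordinary sparse file (LVM-thin and ZFS volumes behave the same way at the block level; the temp file here is just a throwaway stand-in for a VM disk):

<code bash>
# A sparse file has a large apparent size but occupies almost no blocks
# until data is actually written - the same idea thin-provisioned volumes use.
f=$(mktemp)
truncate -s 1G "$f"   # apparent size: 1 GiB
stat -c %s "$f"       # 1073741824 (what the guest would see)
du -B1 "$f"           # ~0 bytes really allocated
dd if=/dev/zero of="$f" bs=1M count=10 conv=notrunc status=none
du -B1 "$f"           # ~10 MiB allocated after writing 10 MiB
rm -f "$f"
</code>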

===== Content types =====

^ content type ^ type ^ description ^ example path ^
| Disk image | images | KVM/QEMU VM images (VM disks) | ''local:230/example-image.raw'' |
| ISO image | iso | | ''local:iso/debian-501-amd64-netinst.iso'' |
| Container template | vztmpl | | ''local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz'' |
| VZDump backup file | backup | | |
| Container | rootdir | allows storing container data | |
| | none | prevents using the block device directly for VMs (e.g. to create LVM on top) | |
| Snippets | snippets | aka hookscripts (for example guest hook scripts) | ''<storage>:snippets/<file>'' |

Example of an iSCSI volume ID:
''iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61''

==== File level storage dir layout ====
  * images - (VM images) - ''images/<VMID>/''
    * raw, qcow2, vmdk
  * iso - (ISO images) - ''template/iso/''
  * vztmpl - (Container templates) - ''template/cache/''
  * backup - (Backup files) - ''dump/''
  * snippets - (Snippets) - ''snippets/''
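To make the paths concrete, here is a toy sketch that recreates this layout under a temporary directory (on a real node the base directory is e.g. ''/var/lib/vz''; VMID ''100'' is a made-up example, and the disk file name follows the usual ''vm-<VMID>-disk-<N>'' convention):

<code bash>
base=$(mktemp -d)   # stand-in for the storage's base directory
mkdir -p "$base/images/100" \
         "$base/template/iso" \
         "$base/template/cache" \
         "$base/dump" \
         "$base/snippets"
# a raw disk for VMID 100 would live at:
touch "$base/images/100/vm-100-disk-0.raw"
find "$base" -mindepth 1 | sort
rm -rf "$base"
</code>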
  
===== Default storage for ZFS =====
  * **local**: file-level storage - you can upload ISO images and place backups there.
  * **local-zfs**: used to store VM images.

Note: both reside on the same ZFS pool.
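With a default ZFS installation the two storages are defined in ''/etc/pve/storage.cfg'' roughly as below (a sketch: ''rpool/data'' is the dataset name a default install uses; adjust it to your pool):

<file conf | /etc/pve/storage.cfg>
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1
</file>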
  
        content images,rootdir
</file>

===== Storage types =====
  * File level storage
    * pool types:
      * **directory** - shared: **NO**
      * **glusterfs** - shared: **YES**
      * **nfs** - shared: **YES**
      * **cifs** - shared: **YES**
    * features:
      * any POSIX compatible filesystem pointed to by a path
      * no snapshots at the filesystem level, but VMs using qcow2 images can still take snapshots
      * any content type: virtual disk images, containers, templates, ISO images, backup files
  * Block level storage
    * ''iscsidirect'' - user-mode iSCSI (''libiscsi2'')
      * content types: images
      * format: raw; shared: YES; no snapshots, no clones
    * ''iscsi''
      * content types: images, none
      * format: raw; shared: YES; no snapshots, no clones
    * ''LVM''
      * can be created on top of an iSCSI LUN to get manageable disk space
      * content types: images, rootdir
      * format: raw; shared: YES (over iSCSI); no snapshots, no clones
    * ''LVM thin''
      * thin volumes on top of an existing LVM VG
      * thin-provisioning
      * content types: images, rootdir
      * format: raw; shared: NO; snapshots, clones
    * ''ZFS over iSCSI'' - use **ZFS on a remote** system via iSCSI
      * benefits of ZFS:
        * for VMs: a ZFS volume per VM, live snapshots, cloning
        * thin provisioning
    * ''ZFS''
      * ZFS on the local node
      * content types: images, rootdir
      * format: raw, subvol; shared: NO; snapshots: YES; clones: YES

===== Pool types =====
  * Network storage
    * LVM Group on iSCSI
    * ZFS
  
===== Local storage =====
  
  
  
===== iSCSI =====
The Proxmox documentation recommends:

  iSCSI is a block level type storage, and provides no management interface.
  So it is usually best to export one big LUN, and setup LVM on top of that LUN.
  You can then use the LVM plugin to manage the storage on that iSCSI LUN.

An iSCSI target can store only the following content types:
  * images
  * none

Options when adding iSCSI storage:
  * **Use LUNs directly** - use the LUN directly as a VM disk, without putting an LVM volume on it.
==== Create LVM on iSCSI ====
  * add the iSCSI device - do **not** select **Use LUNs directly**
  * add the LVM storage ('Add LVM Group' on the Storage list):
    * storage name: anything you like, but it cannot be changed later
    * 'Base Storage': select the previously defined iSCSI target from the drop-down menu
    * 'Base Volume': select a LUN
    * 'Volume Group Name': give a unique name (this name cannot be changed later)
    * enable shared use (recommended)
    * click Save
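The two steps above end up as entries like these in ''/etc/pve/storage.cfg'' (a sketch: the storage IDs ''nas-iscsi''/''nas-lvm'' and the VG name ''vg_nas'' are hypothetical; the ''base'' volume ID is the LUN exposed by the iSCSI storage):

<file conf | /etc/pve/storage.cfg>
iscsi: nas-iscsi
        portal 192.168.28.150
        target iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux
        content none

lvm: nas-lvm
        vgname vg_nas
        base nas-iscsi:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
        content images,rootdir
        shared 1
</file>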
==== NAS326 CHAP issue ====

The NAS326 requires CHAP authentication and an initiator user name.
There are two options to use the NAS326:
  * disable CHAP on the NAS326
  * enable CHAP on Proxmox

The Proxmox initiator name can be found in the file ''/etc/iscsi/initiatorname.iscsi''.

=== disable CHAP on NAS326 ===

This exposes the LUN to everybody on the network. Use it only on an isolated LAN!

[[https://wiki.archlinux.org/index.php/ISCSI/LIO#Disable_Authentication|Disable Authentication]]
[[http://linux-iscsi.org/wiki/ISCSI#Define_access_rights]]

It is possible to define common login information for all endpoints in a TPG: [[http://linux-iscsi.org/wiki/ISCSI#TPG_authentication|TPG authentication]]

How to disable security on the NAS326 (enable iSCSI demo mode):

  * create the LUN(s) and target via the web GUI
  * log in to the Zyxel via ssh as root
  * \\ <code bash>
targetcli ls
targetcli /iscsi/iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux/tpg1/ get attribute
targetcli /iscsi/iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux/tpg1/ set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1
targetcli saveconfig
</code>

=== use CHAP on Proxmox ===

Log out and remove all failed attempts to connect to the NAS326.
In particular, if IPv6 was enabled on the NAS326, Proxmox detects two send_targets: one for IPv4 and one for IPv6 (not reachable).
After disabling IPv6 on the NAS326, delete the IPv6 target portal:
<code bash>
targetcli ls
targetcli /iscsi/iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux/tpg1/portals ls
targetcli /iscsi/iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux/tpg1/portals/ 'delete fd57::be99:11ff:fe06:18b0 3260'
targetcli saveconfig
</code>

<code bash>
ls /etc/iscsi/nodes

# log out
iscsiadm -m node -u -T "iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux" --portal 192.168.28.150
iscsiadm -m node -u -T "iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux" --portal fd57::be99:11ff:fe06:18b0
# remove
iscsiadm -m node -o delete -T "iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux" --portal 192.168.28.150
iscsiadm -m node -o delete -T "iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux" --portal fd57::be99:11ff:fe06:18b0
</code>

Uncomment and set the following config lines:
<file conf | /etc/iscsi/iscsid.conf>
node.session.auth.authmethod = CHAP
# get the initiator name from /etc/iscsi/initiatorname.iscsi
node.session.auth.username = iqn.1993-08.org.debian:01:4dad9d97a329
node.session.auth.password = my_chap_password_for_NAS326
</file>

Now discovery should return only one IPv4 target:
<code bash>
# iscsiadm -m discovery -t sendtargets -p 192.168.28.150
192.168.28.150:3260,1 iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux

# list config options
iscsiadm -m node -o show

# log in
iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux, portal: 192.168.28.150,3260] (multiple)
Login to [iface: default, target: iqn.2020-04.com.zyxel:nas326-iscsi-pve1-isos-target.tjlintux, portal: 192.168.28.150,3260] successful.

# check for the new block device
cat /proc/partitions

iscsiadm -m node --logout
</code>

Now add the iSCSI storage from the web UI.
  
iSCSI + LVM supports HA and live migration of VMs -> mark the LVM storage as **shared**.
  
==== storage for ISOs ====