====== QNAP TS-228 ======
===== HW Spec =====
* 2 x SATA HDD 3.5"
* 2 x 1.1 GHz CPU (ARM® v7 1.1 GHz Dual-core)
* 1GB DDR3 RAM
* 1 x Gigabit RJ-45 Ethernet port
* 1 x USB 3.2 Gen 1 (Front)
* 1 x USB 2.0 (Rear)
===== SW Spec =====
* QTS 4.2 (embedded Linux)
* Storage Management
* Single Disk, JBOD, RAID 0, 1
* Bad block scan and hard drive S.M.A.R.T.
* Bad block recovery
* RAID recovery
* Bitmap support
===== Partitions =====
WDC WD20EFRX-68EUZN0 - CMR disk
Every partition on each of the two RAID disks is a member of a RAID array. Partition sizes:
* 517,50 MiB
* 517,72 MiB
* 1,81 TiB
* 517,72 MiB
* 7,97 GiB
* 10,34 MiB
Layout as reported by ''lsblk'':
<code>
sdb 8:16 0 1,8T 0 disk
├─sdb1 8:17 0 517,7M 0 part
│ └─md9 9:9 0 517,6M 0 raid1
├─sdb2 8:18 0 517,7M 0 part
│ └─md256 9:256 0 517,7M 0 raid1
├─sdb3 8:19 0 1,8T 0 part
│ └─md1 9:1 0 1,8T 0 raid1
│ ├─vg1-lv544 253:0 0 18,5G 0 lvm
│ └─vg1-lv1 253:1 0 1,8T 0 lvm
├─sdb4 8:20 0 517,7M 0 part
│ └─md13 9:13 0 448,1M 0 raid1
└─sdb5 8:21 0 8G 0 part
└─md322 9:322 0 6,9G 0 raid1
sdc 8:32 0 1,8T 0 disk
├─sdc1 8:33 0 517,7M 0 part
├─sdc2 8:34 0 517,7M 0 part
├─sdc3 8:35 0 1,8T 0 part
│ └─md1 9:1 0 1,8T 0 raid1
│ ├─vg1-lv544 253:0 0 18,5G 0 lvm
│ └─vg1-lv1 253:1 0 1,8T 0 lvm
├─sdc4 8:36 0 517,7M 0 part
└─sdc5 8:37 0 8G 0 part
</code>
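Before assembling anything, the RAID superblocks on the member partitions can be inspected read-only; a sketch using the device names from the listing above:
<code bash>
# Print the md superblock of a single member (array UUID, role, state)
mdadm --examine /dev/sdb3

# Print array definitions derived from all members found on the system
mdadm --examine --scan
</code>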
On the recovery host, install the tools and assemble the arrays:
<code>
apt install mdadm lvm2
mdadm --assemble --scan
</code>
* /dev/md9 - ext4 - 517,62 MiB (157,61 MiB used) - /mnt/HDA_ROOT
* /dev/md256 - linux-swap - 517,69 MiB
* /dev/md1 - LVM2
  * '/dev/vg1/lv544' [<18,54 GiB] - contains only zeros
  * '/dev/vg1/lv1' [1,79 TiB] - LUKS (aes, cbc-plain, sha1)
* /dev/md13 - ext3 - 448,14 MiB (324,98 MiB used) - /mnt/ext
* /dev/md322 - linux-swap - 6,90 GiB
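These types can be confirmed directly from the on-disk signatures; a sketch, assuming the arrays are assembled and the LVM volumes are active (see below):
<code bash>
# Show filesystem / swap / LVM signatures on the md devices
blkid /dev/md9 /dev/md256 /dev/md1 /dev/md13 /dev/md322

# Confirm that lv1 really carries a LUKS header
cryptsetup isLuks /dev/vg1/lv1 && echo "LUKS header present"
</code>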
After installing mdadm, the OS automatically assembled some MD devices, but some of them were in a read-only state and contained only one RAID member.
To force correct detection again:
<code>
mdadm --stop /dev/md0
mdadm --assemble --scan
</code>
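If several arrays were auto-assembled in this half-broken state, the same stop-and-rescan can be applied to all of them; a sketch, assuming no other md arrays on the recovery host are in use:
<code bash>
# Stop every array currently listed in /proc/mdstat ...
for md in $(awk '/^md/ {print $1}' /proc/mdstat); do
    mdadm --stop "/dev/$md"
done

# ... then let mdadm reassemble them with all available members
mdadm --assemble --scan
</code>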
<code>
mdadm --detail /dev/md1
/dev/md1:
Version : 1.0
Creation Time : Fri Apr 7 17:18:29 2017
Raid Level : raid1
Array Size : 1943559616 (1853.52 GiB 1990.21 GB)
Used Dev Size : 1943559616 (1853.52 GiB 1990.21 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Mon Feb 20 13:17:08 2023
State : clean, degraded
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Consistency Policy : resync
Name : 1
UUID : a1fa9f39:73ca1af0:96ad282b:0fb0c934
Events : 1252602
Number Major Minor RaidDevice State
2 8 19 0 spare rebuilding /dev/sdb3
1 8 35 1 active sync /dev/sdc3
</code>
Two members are present, but one stays in the ''spare rebuilding'' state and nothing happens.
To start the rebuild, switch the array into read-write mode:
<code>
mdadm --readwrite /dev/md1
cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb3[2] sdc3[1]
1943559616 blocks super 1.0 [2/1] [_U]
[===============>.....] recovery = 75.3% (1463781952/1943559616) finish=421.1min speed=18985K/sec
</code>
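The rebuild can be left running unattended; a sketch of how to watch it or wait for it to finish:
<code bash>
# Refresh the rebuild status every 60 seconds
watch -n 60 cat /proc/mdstat

# Or block until the rebuild of md1 has completed
mdadm --wait /dev/md1
</code>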
Another option to force a correct resync, not used in this case:
<code>
mdadm --stop /dev/md1
mdadm --assemble --run --force --update=resync /dev/md1 /dev/sdb3 /dev/sdc3
</code>
<code>
pvscan
WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
PV /dev/md1 VG vg1 lvm2 [1,81 TiB / 0 free]
Total: 1 [1,81 TiB] / in use: 1 [1,81 TiB] / in no VG: 0 [0 ]
</code>
<code>
vgdisplay
WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
--- Volume group ---
VG Name vg1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 154
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 1,81 TiB
PE Size 4,00 MiB
Total PE 474501
Alloc PE / Size 474501 / 1,81 TiB
Free PE / Size 0 / 0
VG UUID G3dFff-PS77-XQ1X-a37I-zFT4-qS6L-A4TxSr
</code>
<code>
lvscan
WARNING: PV /dev/md1 in VG vg1 is using an old PV header, modify the VG to update.
ACTIVE '/dev/vg1/lv544' [<18,54 GiB] inherit
ACTIVE '/dev/vg1/lv1' [1,79 TiB] inherit
</code>
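Both logical volumes were already ACTIVE here. If they show up as inactive on another recovery host, they can be activated explicitly; a sketch that leaves the old PV header untouched (no metadata writes during recovery):
<code bash>
# Activate every LV in vg1 and show the resulting device nodes
vgchange -ay vg1
lvs vg1
ls -l /dev/vg1/ /dev/mapper/
</code>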
===== /dev/vg1/lv1 encrypted =====
The volume is encrypted and contains a standard LUKS header.
According to posts found online, the encryption password is derived from the user-supplied password with an algorithm that is not publicly documented.
They point to a utility local to the NAS:
<code>
/sbin/storage_util --encrypt_pwd pwd=YOUR_PASSWORD
Encrypted passwd is: …………………………………..
</code>
Reference:
* [[https://unix.stackexchange.com/questions/685821/how-to-mount-on-linux-a-qnap-external-drive-with-luks-partition]]
* [[https://www.linux-howto.info/mount-qnap-encrypted-volume/]]
==== Try to run x86 QTS in Proxmox VM ====
Set VGA to ''none''. Add a serial port for the console - the Proxmox console then automatically switches to the serial port.
Import the raw image as a VM disk:
<code>
qm importdisk 304 F_TS-X85_20170210-1.3.0_512M.img local-lvm --format raw
</code>
Problem: the boot cannot be stopped at GRUB; it boots the kernel, loads the initrd and then gets stuck. Probably some HW dependencies are not met (eMMC disk?).
Switched to a simpler solution with Docker (see below).
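For reference, the corresponding Proxmox CLI calls would look roughly like this (VM ID 304 as above); a sketch only, since the boot never got past the initrd:
<code bash>
# Serial console instead of VGA, so the Proxmox console attaches to the serial port
qm set 304 --vga none --serial0 socket

# Import the QTS recovery image as a raw disk on local-lvm
qm importdisk 304 F_TS-X85_20170210-1.3.0_512M.img local-lvm --format raw
</code>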
==== storage_util docker ====
Based on files from ''F_TS-X85_20170210-1.3.0_512M'' I created a root filesystem with ''storage_util'' and the minimum required libraries.
And it works: the generated password opens the LUKS storage.
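One way such a minimal image can be put together, as a sketch only: the assumption is that ''/sbin/storage_util'' and the shared libraries it needs were copied from the extracted firmware into a local ''rootfs/'' directory.
<code bash>
# Turn the hand-built root filesystem into a Docker image
tar -C rootfs -c . | docker import - qnap-storage-util

# Run the QNAP tool inside it to derive the encrypted password
docker run --rm qnap-storage-util /sbin/storage_util --encrypt_pwd pwd=YOUR_PASSWORD
</code>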
==== mkpasswd ====
After making the Docker image, I found a post on the QNAP forum saying that the password can also be generated with a simple salted MD5 (md5crypt) hash:
<code>
mkpasswd -m md5 -S YCCaQNAP
cryptsetup luksOpen /dev/vg1/lv1 nas
mount /dev/mapper/nas /mnt/nas
</code>
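If ''mkpasswd'' (from the whois package) is not available, ''openssl'' produces the same salted MD5-crypt string. A sketch, assuming (as the commands above imply) that the full crypt string is used as the LUKS passphrase; YOUR_PASSWORD is a placeholder:
<code bash>
# Same md5crypt derivation as mkpasswd, using the fixed QNAP salt
openssl passwd -1 -salt YCCaQNAP 'YOUR_PASSWORD'

# Optionally feed the derived string straight to cryptsetup instead of typing it
openssl passwd -1 -salt YCCaQNAP 'YOUR_PASSWORD' | tr -d '\n' \
  | cryptsetup luksOpen /dev/vg1/lv1 nas --key-file=-
</code>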
===== Recovery =====
<code>
apt install testdisk
photorec /dev/mapper/nas
</code>
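''photorec'' drops recovered files into ''recup_dir.*'' folders under its destination directory, so it is safer to point it at a different disk than the one being recovered. A sketch with ''/mnt/recovered'' as a hypothetical target:
<code bash>
mkdir -p /mnt/recovered

# /d sets the destination directory, /log writes a photorec session log
photorec /log /d /mnt/recovered/ /dev/mapper/nas
</code>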
===== unmount and shutdown =====
<code>
umount /dev/mapper/nas
cryptsetup luksClose nas
dmsetup remove vg1-lv544
dmsetup remove vg1-lv1
mdadm --stop /dev/md1
mdadm --stop /dev/md256
mdadm --stop /dev/md9
mdadm --stop /dev/md322
mdadm --stop /dev/md13
</code>
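A quick check that everything was really released before shutting down and pulling the disks; a small sketch:
<code bash>
# No vg1-* or nas mappings should remain, and no md arrays should be listed
dmsetup ls
cat /proc/mdstat
lsblk /dev/sdb /dev/sdc
</code>
As an alternative to the two ''dmsetup remove'' calls, ''vgchange -an vg1'' deactivates both logical volumes in one step.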