1 QNAP NAS Data Recovery
Charley, Yufan, Alan
Note: This guide applies to all TS/SS series NAS except the TS-401T and TS-411U.
2 Agenda
How does the QNAP NAS RAID work?
NAS is OK but data cannot be accessed:
- raidtab is broken or missing: check the RAID settings and configure a correct raidtab
- HDDs have no partitions: use parted to recreate the partitions
- Partitions have no MD superblock: mdadm -CfR --assume-clean
- RAID array can't be assembled or its status is inactive: check the above and make sure every disk of the RAID is present
- RAID array can't be mounted: e2fsck, e2fsck -b
- Able to mount the RAID but the data has disappeared: umount and e2fsck; if that doesn't work, try data recovery
- RAID is degraded, read-only: back up the data, then mdadm -CfR; if that doesn't work, recreate the RAID
NAS fails:
- Mount the HDD(s) in another QNAP NAS (system migration)
- Mount the HDD(s) in a PC (R-Studio / ext3/4 reader) (3rd-party tools)
Data deleted accidentally by user/administrator: data recovery company, photorec, R-Studio/R-Linux
3 How does the QNAP NAS RAID work
Please check the following link for the complete tutorial: https://docs.google.com/document/d/1VmIHqIOrBG7s0ymqn46eDK1TmXwCJx685cpWMwF42KA/edit
The guide above covers all the procedures our NAS uses to create a RAID volume.
Prerequisite: download the losetup utility to a NAS. After downloading, extract it with tar -xf and run it. This utility is used to create the virtual disks that simulate the disks in the tutorial above.
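As a hedged illustration (not taken from the tutorial itself), a plain file can be attached to a loop device to act as a virtual disk; the file name, size, and loop device below are arbitrary:
# dd if=/dev/zero of=/tmp/vdisk0.img bs=1M count=512 ** create a 512 MB image file to act as a virtual disk
# ./losetup /dev/loop0 /tmp/vdisk0.img ** attach it with the extracted losetup utility
# ./losetup -d /dev/loop0 ** detach it when the simulation is finished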
4 Introduction of the mdadm command
# mdadm -E /dev/sda3
> tells whether the partition is an md member disk
# mdadm -Af /dev/md0 /dev/sd[a-d]3
> assembles the available md member disks into the RAID array
# mdadm -CfR -l5 -n8 --assume-clean /dev/md0 /dev/sd[a-h]3
> rewrites the md superblock on each disk
> -CfR: force-create the RAID array
> -l5: RAID-5 array
> -n8: number of member disks
> --assume-clean: skip syncing the data partitions
A combined sequence is sketched below.
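A typical inspection sequence chaining the commands above (the device names are examples only):
# mdadm -E /dev/sda3 ** confirm each partition carries an md superblock
# mdadm -Af /dev/md0 /dev/sd[a-d]3 ** assemble the surviving members into the array
# more /proc/mdstat ** verify the array is active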
5 Introduction of two scripts
# config_util
Usage: config_util input
input=0: Check if any HD exists.
input=1: Mirror ROOT partition.
input=2: Mirror Swap Space (not yet).
input=4: Mirror RFS_EXT partition.
>> We usually run "config_util 1" to get md9 ready.
# storage_boot_init
Usage: storage_boot_init phase
phase=1: mount ROOT partition.
phase=2: mount DATA partition, create storage.conf and refresh disk.
phase=3: Create_Disk_Storage_Conf.
>> We usually run "storage_boot_init 1" to mount md9. A combined example follows below.
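Putting the two scripts together, the usual sequence to bring up and mount md9 is:
# config_util 1 ** mirror the ROOT partition so /dev/md9 is ready
# storage_boot_init 1 ** mount the ROOT partition (md9)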
6 RAID Issue - raidtab is broken
raidtab is used to determine whether a disk belongs to a RAID group or is single, and to show the RAID information on the web UI. If a disk is in a RAID but the web UI shows it as single, or the displayed RAID information differs from the actual on-disk RAID data (checked with mdadm -E), then the raidtab is probably corrupt. In that case, manually edit the raidtab file so that it matches the actual RAID status. See the following slides for raidtab contents.
10 raidtab for RAID-5 + global spare and RAID-6
RAID-5 + global spare: the raidtab is the same as for RAID-5. In uLinux.conf, add a line if the global spare disk is disk 4:
[Storage]
GLOBAL_SPARE_DRIVE_4 = TRUE
RAID-6:
raiddev /dev/md0
        raid-level      6
        nr-raid-disks   4
        nr-spare-disks  0
        chunk-size      4
        persistent-superblock 1
        device          /dev/sda3
        raid-disk       0
        device          /dev/sdb3
        raid-disk       1
        device          /dev/sdc3
        raid-disk       2
        device          /dev/sdd3
        raid-disk       3
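For reference, a plain RAID-5 raidtab uses the same layout; a sketch for an assumed 4-disk RAID-5 (only the raid-level differs from the RAID-6 example above):
raiddev /dev/md0
        raid-level      5
        nr-raid-disks   4
        nr-spare-disks  0
        chunk-size      4
        persistent-superblock 1
        device          /dev/sda3
        raid-disk       0
        device          /dev/sdb3
        raid-disk       1
        device          /dev/sdc3
        raid-disk       2
        device          /dev/sdd3
        raid-disk       3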
12 RAID fail - HDDs have no partitions
When you check an HDD with the following commands, there is no partition or only one partition (sample output is on the next slides).
# parted /dev/sdx print
# blkid ** this command shows all partitions on the NAS
Note: fdisk -l cannot show a correct partition table for 3TB HDDs. A loop for checking every disk is sketched below.
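To check every disk at once, a quick loop over the expected devices can help (a sketch; adjust the device range to the bay count of the NAS model):
# for d in /dev/sd[a-d]; do parted $d print; done ** print each disk's partition table in turn
# blkid ** list every partition the kernel currently sees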
13 RAID fail - HDDs have no partitions (cont.)
The following tool (x86 only) can calculate the correct partition sizes from the HDD size. Save it on your NAS (x86 model) and make sure the file size is 10,086 bytes.
1. Get each disk's size.
# cat /sys/block/sda/size
2. Get the disk's partition list. It should contain 4 partitions if the disk is normal.
# parted /dev/sda print
Model: Seagate ST AS (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size   Type     File system     Flags
 1      32.3kB  543MB   543MB  primary  ext3            boot
 2      543MB   1086MB  543MB  primary  linux-swap(v1)
 3      1086MB  320GB   318GB  primary  ext3
 4      320GB   320GB   510MB  primary  ext3
14 RAID fail - HDDs have no partitions (cont.)
3. Run the tool on your NAS to get the recovery commands (the start/end sectors in its output are computed from the disk size; they are elided here):
# Create_Partitions /dev/sda
/dev/sda size disk_size=...
/usr/sbin/parted /dev/sda -s mkpart primary 40s ...s
/usr/sbin/parted /dev/sda -s mkpart primary ...s ...s
/usr/sbin/parted /dev/sda -s mkpart primary ...s ...s
/usr/sbin/parted /dev/sda -s mkpart primary ...s ...s
If the disk contains no partitions, run all 4 commands.
If the disk contains only 1 partition, run the last 3 commands.
If the disk contains only 2 partitions, run the last 2 commands.
If the disk contains only 3 partitions, run the last command.
4. Run the partition commands above according to the number of existing partitions.
5. Check the disk's partitions after the recovery; it should now contain 4 partitions.
# parted /dev/sda print
Model: Seagate ST AS (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size   Type     File system     Flags
 1      32.3kB  543MB   543MB  primary  ext3            boot
 2      543MB   1086MB  543MB  primary  linux-swap(v1)
 3      1086MB  320GB   318GB  primary  ext3
 4      320GB   320GB   510MB  primary  ext3
6. Then run "sync" or reboot the NAS for the new partitions to take effect.
15 RAID fail - Partitions have no md superblock
If one or all HDD partitions are lost, or the partitions have no md superblock for an unknown reason, use the mdadm -CfR command to recreate the RAID.
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 ...
Note: Make sure the disks are in the correct sequence. Use "mdadm -E" or check raidtab to confirm. If one of the disks is missing or has a problem, replace that disk with "missing". For example:
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 missing /dev/sdc3 /dev/sdd3
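After recreating the array, it is worth confirming the result before touching the file system (a sketch using standard mdadm options):
# mdadm -D /dev/md0 ** show the detailed state of the recreated array
# more /proc/mdstat ** confirm the array is active with all members present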
16 RAID fail - RAID can't be assembled or status is inactive
- Check partitions and md superblock status
- Check whether any RAID disk is missing / faulty
- Use "mdadm -CfR --assume-clean" to recreate the RAID
17 If there is no md0 device for the array, manually create md0 with mdadm -CfR (a sketch follows below).
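If the /dev/md0 node itself is missing, it can be created by hand before running mdadm. A sketch under that assumption (md arrays use block major number 9, minor 0 for md0):
# mknod /dev/md0 b 9 0 ** create the md0 device node
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sd[a-d]3 ** then force-create the array, e.g. for a 4-disk RAID-5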
18 RAID fail - Can't be mounted, status unmounted
1. Make sure the RAID status is active (more /proc/mdstat)
2. Try to mount manually:
# mount /dev/md0 /share/MD0_DATA -t ext3
# mount /dev/md0 /share/MD0_DATA -t ext4
# mount /dev/md0 /share/MD0_DATA -o ro (read-only)
3. Use e2fsck / e2fsck_64 to check:
# e2fsck -y /dev/md0 (continue automatically, answering yes to all questions)
4. If the check reports many errors, there may not be enough memory; create more swap space using the procedure on the next slide.
19 RAID fail - Can't be mounted, status unmounted (cont.)
Use the following commands to create more swap space:
[~] # more /proc/mdstat
md8 : active raid1 sdh2(S) sdg2(S) sdf2(S) sde2(S) sdd2(S) sdc2(S) sdb2 sda2 ... blocks [2/2] [UU]
[~] # swapoff /dev/md8
[~] # mdadm -S /dev/md8
mdadm: stopped /dev/md8
[~] # mkswap /dev/sda2
Setting up swapspace version 1, size = ... kB
no label, UUID=7194e0a9-be7a-43ac-829f-fd2d55e07d62
[~] # mkswap /dev/sdb2
no label, UUID=0af8fcdd-8ed1-4fca-8f...d86f9474
[~] # mkswap /dev/sdc2
no label, UUID=f40bd...c71-b8ff-9c1e9fbff6bf
[~] # mkswap /dev/sdd2
no label, UUID=4dad1835-8d88-4cf1-a851-d80a87706fea
[~] # swapon /dev/sda2
[~] # swapon /dev/sdb2
[~] # swapon /dev/sdc2
[~] # swapon /dev/sdd2
[~] # e2fsck_64 -fy /dev/md0
20 RAID fail - Can't be mounted, status unmounted (cont.)
If there is no file system superblock or the check fails, you can try a backup superblock.
1. Use the following command to find the backup superblock locations:
# /usr/local/sbin/dumpe2fs /dev/md0 | grep superblock
Sample output:
Primary superblock at 0, Group descriptors at 1-6
Backup superblock at 32768, Group descriptors at ...
Backup superblock at 98304, Group descriptors at ...
2. Now check and repair the Linux file system using an alternate superblock, e.g. 32768:
# e2fsck -b 32768 /dev/md0
fsck (12-Jul-2007)
e2fsck (12-Jul-2007)
/dev/sda2 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Free blocks count wrong for group #241 (32254, counted=32253). Fix? yes
/dev/sda2: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda2: 59586/... files (0.6% non-contiguous), .../... blocks
3. Now try to mount the file system using the mount command:
# mount /dev/md0 /share/MD0_DATA -t ext4
21 RAID fail - able to mount but data disappeared
If the mount succeeds but the data has disappeared, unmount the RAID and run e2fsck again (you can try a backup superblock). If it still fails, try a data recovery program (photorec, R-Studio) or contact a data recovery company.
22 RAID fail - RAID is degraded, read-only
In the degraded, read-only state, more disks have failed than the RAID level can tolerate. Help the user find out which disks are faulty if the web UI isn't helpful:
- Check klog or dmesg to find the faulty disks (see the sketch below)
- Ask the user to back up the data first
- If the disks look OK, after the backup, try "mdadm -CfR --assume-clean" to recreate the RAID
- If the above doesn't work, recreate the RAID volume from scratch
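A quick way to spot the faulty members from the shell (an illustrative sketch; the exact log wording varies by kernel version):
# dmesg | grep -i error ** look for ATA/IO errors in the kernel log
# mdadm -D /dev/md0 ** the State and per-disk lines show which members are marked faulty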
24 NAS fail - Mount HDD(s) with another QNAP NAS
The user can plug the HDD(s) into another NAS of the same model to access the data.
The user can plug the HDD(s) into a NAS of a different model to access the data by performing a system migration.
Note: the TS-101/201/109/209/409/409U series do not support system migration.
Since the firmware is also stored on the HDD(s), its firmware version may differ from the firmware on the NAS. A firmware upgrade may be required after the above operation.
25 NAS fail - Access HDD(s) data with a Windows PC
For single-disk or RAID-1 HDDs, the user can plug one of the HDDs into a PC (by USB, SATA or eSATA) and access the data through 3rd-party software (ext2fsd, explore2fs, etc.). Check the following for details.
Note: File/folder names are in Unicode (UTF-8).
TS-109/209 models use a non-standard ext3; use the QNAP Live CD to access the data.
For other RAID configurations, the user can use R-Studio to mount the RAID and access the data. Check the following link for a RAID-0/5 example: https://docs.google.com/open?id=0B8u8qWRYVhv0ZTk4OTEzYWQtY2ZiOC00NmZjLWE1OWUtNTJhNDE3OGQ5ZDYw
26 NAS cannot boot correctly with HDDs installed
The NAS cannot boot correctly with the HDDs installed, but without the HDDs it boots without any problem. This can be caused by a faulty HDD, IO errors on some blocks of an HDD, or corrupt configuration/system files. If the user wants to access the data quickly, try the following procedure (steps 3 and 4 are sketched below):
1. Power on the NAS without the HDDs installed
2. Hot-plug the HDDs into the NAS
3. Assemble the RAID
4. Copy the data off with WinSCP or back it up to an external drive
NOTE: ARM-based NAS models don't support SFTP when booted without an HDD; connect an external drive for the backup instead. See the following slide for the procedure to mount an NTFS external drive.
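Steps 3 and 4 in shell form, reusing commands from the earlier slides (device names and paths are examples; the external drive is assumed to be mounted already):
# mdadm -Af /dev/md0 /dev/sd[a-d]3 ** assemble the data RAID from the hot-plugged disks
# mkdir -p /share/MD0_DATA
# mount /dev/md0 /share/MD0_DATA -o ro ** mount read-only to keep the data safe while copying
# cp -a /share/MD0_DATA/. /share/external/ ** copy the data off to the external drive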
27 Mount an NTFS/HPFS volume on an ARM-based NAS booted without HDDs
The following is the procedure to mount an NTFS/HPFS volume on an ARM-platform NAS without an initial HDD:
1. Download the following two files to the NAS.
2. Put nls_utf8.ko in /lib/modules/others
3. Put ufsd.ko in /lib/modules/misc
4. # insmod nls_utf8.ko
   # insmod ufsd.ko
5. # mount -t ufsd /dev/sdya1 /share/esata -o iocharset=utf8,dmask=0000,fmask=0111,force
NOTE: If the disk is larger than 2TB, the 1st partition may be the GPT one, so we have to mount the 2nd partition instead.
28 Data deleted accidentally by user/administrator
1. User deleted folders/files
- Use photorec / R-Studio / data recovery software to recover the data. Check the following link for using R-Studio: https://docs.google.com/open?id=0B8u8qWRYVhv0ZTk4OTEzYWQtY2ZiOC00NmZjLWE1OWUtNTJhNDE3OGQ5ZDYw
2. User removed the RAID volume
- See the next slide
3. User formatted the RAID volume
- Use photorec / data recovery software to recover the data
4. User performed Restore to Factory Default
- It formats the RAID and resets all settings; same as 3.
5. User removed HDD(s) and caused the RAID volume to fail
- "mdadm -CfR --assume-clean" should work
29 User removed the RAID volume
# more /proc/mdstat ** check whether the RAID is really removed
# mdadm -E /dev/sda3 ** check whether the md superblock is really removed
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 3 /dev/sda3 /dev/sdb3 /dev/sdc3 ** recreate the RAID, assuming a 3-HDD RAID-5
# e2fsck -y /dev/md0 ** check the file system, answering "yes" to all questions; on 64-bit models use e2fsck_64
# mount /dev/md0 /share/MD0_DATA -t ext4 ** mount the RAID back
# vi raidtab ** manually recreate the RAID table (a sketch follows below)
# rm /etc/storage.conf ** refresh the web UI volume display
# reboot ** the removed network share(s) must be added back after the reboot
Note: Only TS-x79, TS-809 series, and D510/D525 models with 5 or more bays support the 64-bit commands.
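For the "vi raidtab" step, a sketch of a matching entry for the 3-HDD RAID-5 assumed above, following the layout shown on slide 10:
raiddev /dev/md0
        raid-level      5
        nr-raid-disks   3
        nr-spare-disks  0
        chunk-size      4
        persistent-superblock 1
        device          /dev/sda3
        raid-disk       0
        device          /dev/sdb3
        raid-disk       1
        device          /dev/sdc3
        raid-disk       2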