QNAP NAS Data Recovery
Charley, Yufan, Alan
Note: This guide applies to all TS/SS series NAS except the TS-401T and TS-411U.

Agenda
How does the QNAP NAS RAID work?
NAS is OK but data cannot be accessed:
- raidtab is broken or missing: check the RAID settings and write a correct raidtab
- HDDs have no partitions: use parted to recreate the partitions
- Partitions have no md superblock: mdadm -CfR --assume-clean
- RAID array can't be assembled or its status is inactive: check the above and make sure every disk of the RAID is present
- RAID array can't be mounted: e2fsck, e2fsck -b
- RAID mounts but the data has disappeared: unmount and run e2fsck; if that doesn't work, try data recovery
- RAID is degraded, read-only: back up the data, then mdadm -CfR; if that doesn't work, recreate the RAID
NAS fails:
- Mount the HDD(s) in another QNAP NAS (system migration)
- Mount the HDD(s) in a PC (R-Studio / ext3/4 reader) (3rd-party tools)
Data deleted accidentally by a user/administrator:
- data recovery company, photorec, R-Studio/R-Linux

How does the QNAP NAS RAID work?
Please check the following link for the complete tutorial:
https://docs.google.com/document/d/1VmIHqIOrBG7s0ymqn46eDK1TmXwCJx685cpWMwF42KA/edit
The guide above covers all the procedures our NAS uses to create a RAID volume.
Prerequisite: download the losetup utility to a NAS:
ftp://csdread:csdread@ftp.qnap.com/NAS/utility/losetup-arm.tar
ftp://csdread:csdread@ftp.qnap.com/NAS/utility/losetup-x86.tar
After downloading, extract it with tar -xf and run it. This utility creates virtual disks for simulating the disks used in the tutorial above.
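As a rough sketch of how a virtual disk can be set up for practice (the image path, size, and loop device number are assumptions, and the extracted binary is assumed to be named losetup):
# dd if=/dev/zero of=/tmp/vdisk0.img bs=1M count=512
> create a 512 MB empty image file to stand in for a disk
# ./losetup /dev/loop0 /tmp/vdisk0.img
> attach the image as a loop device so it behaves like a block device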

Introduction to the mdadm command
# mdadm -E /dev/sda3
> tells whether the partition is an md member disk
# mdadm -Af /dev/md0 /dev/sd[a-d]3
> assembles the available md member disks into the RAID array
# mdadm -CfR -l5 -n8 --assume-clean /dev/md0 /dev/sd[a-h]3
> overwrites the md superblock on each disk
> -CfR: force-create the RAID array
> -l5: RAID-5 array
> -n8: number of member disks
> --assume-clean: skip the data partition sync
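After assembling or creating an array, it is worth confirming the result before going further; a minimal check (array name assumed to be md0, as elsewhere in this guide) is:
# cat /proc/mdstat
> lists every md array and shows whether it is active, degraded, or inactive
# mdadm -D /dev/md0
> prints the array detail: RAID level, member disks, and state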

Introduction to two scripts
# config_util
Usage: config_util input
    input=0: Check if any HD existed.
    input=1: Mirror ROOT partition.
    input=2: Mirror Swap Space (not yet).
    input=4: Mirror RFS_EXT partition.
>> usually we run config_util 1 to get md9 ready
# storage_boot_init
Usage: storage_boot_init phase
    phase=1: mount ROOT partition.
    phase=2: mount DATA partition, create storage.conf and refresh disk.
    phase=3: Create_Disk_Storage_Conf.
>> usually we run storage_boot_init 1 to mount md9
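Putting the two together, the usual manual recovery of the system partition (per the notes above) is:
# config_util 1
> rebuild the md9 mirror that holds the ROOT partition
# storage_boot_init 1
> mount md9 so the system files are accessible again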

RAID Issue - raidtab is broken
raidtab is used to determine whether a disk belongs to a RAID group or is single, and to show the RAID information in the web UI. If a disk is in a RAID but the web UI shows it as single, or the RAID information differs from the actual on-disk RAID data (checked with mdadm -E), then raidtab is probably corrupt. In that case, manually edit the raidtab file so that it matches the actual RAID status; a quick comparison check is shown below, and the following slides give the raidtab contents for each RAID type.
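To see whether the two disagree, dump both sources side by side (the raidtab path is assumed to be /etc/raidtab, the file edited later in this guide):
# mdadm -E /dev/sda3
> the actual RAID level and member count, read from the md superblock
# cat /etc/raidtab
> what the firmware believes; the two must agree for the web UI to be correct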

raidtab for Single and RAID-0
Single: no raidtab
RAID-0 (striping):
raiddev /dev/md0
        raid-level      0
        nr-raid-disks   2
        nr-spare-disks  0
        chunk-size      4
        persistent-superblock   1
        device  /dev/sda3
        raid-disk       0
        device  /dev/sdb3
        raid-disk       1

raidtab for RAID-1 and JBOD
RAID-1 (mirror):
raiddev /dev/md0
        raid-level      1
        nr-raid-disks   2
        nr-spare-disks  0
        chunk-size      4
        persistent-superblock   1
        device  /dev/sda3
        raid-disk       0
        device  /dev/sdb3
        raid-disk       1
JBOD (linear):
raiddev /dev/md0
        raid-level      linear
        nr-raid-disks   3
        nr-spare-disks  0
        chunk-size      4
        persistent-superblock   1
        device  /dev/sda3
        raid-disk       0
        device  /dev/sdb3
        raid-disk       1
        device  /dev/sdc3
        raid-disk       2

raidtab for RAID-5 and RAID-5 + hot spare
RAID-5:
raiddev /dev/md0
        raid-level      5
        nr-raid-disks   3
        nr-spare-disks  0
        chunk-size      4
        persistent-superblock   1
        device  /dev/sda3
        raid-disk       0
        device  /dev/sdb3
        raid-disk       1
        device  /dev/sdc3
        raid-disk       2
RAID-5 + hot spare:
raiddev /dev/md0
        raid-level      5
        nr-raid-disks   3
        nr-spare-disks  1
        chunk-size      4
        persistent-superblock   1
        device  /dev/sda3
        raid-disk       0
        device  /dev/sdb3
        raid-disk       1
        device  /dev/sdc3
        raid-disk       2
        device  /dev/sdd3
        spare-disk      0

raidtab for RAID-5 + global spare and RAID-6
RAID-5 + global spare: the raidtab is the same as RAID-5.
In uLinux.conf, add a line if the global spare disk is disk 4:
[Storage]
GLOBAL_SPARE_DRIVE_4 = TRUE
RAID-6:
raiddev /dev/md0
        raid-level      6
        nr-raid-disks   4
        nr-spare-disks  0
        chunk-size      4
        persistent-superblock   1
        device  /dev/sda3
        raid-disk       0
        device  /dev/sdb3
        raid-disk       1
        device  /dev/sdc3
        raid-disk       2
        device  /dev/sdd3
        raid-disk       3

raidtab for RAID-10
RAID-10:
raiddev /dev/md0
        raid-level      10
        nr-raid-disks   4
        nr-spare-disks  0
        chunk-size      4
        persistent-superblock   1
        device  /dev/sda3
        raid-disk       0
        device  /dev/sdb3
        raid-disk       1
        device  /dev/sdc3
        raid-disk       2
        device  /dev/sdd3
        raid-disk       3

RAID fail - HDDs have no partitions
When the HDD is checked with the following commands, there is no partition, or only one partition:
# parted /dev/sdx print
# blkid    ** this command shows all partitions on the NAS
Note: fdisk -l cannot show a correct partition table for 3TB HDDs.

RAID fail - HDDs have no partitions (cont.)
The following tool (x86 only) calculates the correct partition sizes from the HDD size. Save it on your NAS (x86 models) and make sure the file size is 10,086 bytes.
ftp://csdread:csdread@ftp.qnap.com/NAS/utility/Create_Partitions
1. Get each disk's size:
# cat /sys/block/sda/size
625142448
2. Get the disk's partition list. It should contain 4 partitions if healthy:
# parted /dev/sda print
Model: Seagate ST3320620AS (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size   Type     File system     Flags
 1      32.3kB  543MB   543MB  primary  ext3            boot
 2      543MB   1086MB  543MB  primary  linux-swap(v1)
 3      1086MB  320GB   318GB  primary  ext3
 4      320GB   320GB   510MB  primary  ext3

RAID fail - HDDs have no partitions (cont.)
3. Run the tool on your NAS to get the recovery commands:
# Create_Partitions /dev/sda 625142448
/dev/sda size 305245
disk_size=625142448
/usr/sbin/parted /dev/sda -s mkpart primary 40s 1060289s
/usr/sbin/parted /dev/sda -s mkpart primary 1060296s 2120579s
/usr/sbin/parted /dev/sda -s mkpart primary 2120584s 624125249s
/usr/sbin/parted /dev/sda -s mkpart primary 624125256s 625121279s
If the disk contains no partitions, run all 4 commands.
If the disk contains only 1 partition, run the last 3 commands.
If the disk contains only 2 partitions, run the last 2 commands.
If the disk contains only 3 partitions, run the last command.
4. Run the partition commands above according to the number of existing partitions.
5. Check the disk partitions after recovery; the disk should now contain 4 partitions:
# parted /dev/sda print
Model: Seagate ST3320620AS (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size   Type     File system     Flags
 1      32.3kB  543MB   543MB  primary  ext3            boot
 2      543MB   1086MB  543MB  primary  linux-swap(v1)
 3      1086MB  320GB   318GB  primary  ext3
 4      320GB   320GB   510MB  primary  ext3
6. Then run "sync" or reboot the NAS for the new partitions to take effect.

RAID fail - Partitions have no md superblock
If one or all HDD partitions are lost, or the partitions have no md superblock for an unknown reason, use the mdadm -CfR command to recreate the RAID:
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3...
Note:
Make sure the disks are in the correct sequence; use "mdadm -E" or check raidtab to confirm.
If one of the disks is missing or has problems, replace that disk with the word "missing". For example:
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3 missing /dev/sdc3 /dev/sdd3

RAID fail - RAID can't be assembled or status is inactive
Check the partitions and the md superblock status.
Check whether any RAID disk is missing or faulty.
Use "mdadm -CfR --assume-clean" to recreate the RAID, as sketched below.
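A minimal diagnose-then-recreate sequence (assuming a 4-disk RAID-5 on /dev/sd[a-d]3; adjust -l, -n, and the device list to the real array):
# cat /proc/mdstat
> check whether md0 is listed and whether it is marked inactive
# mdadm -E /dev/sda3
> repeat for every member partition to confirm the level, member count, and disk order
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sd[a-d]3
> recreate the array in place; --assume-clean avoids a destructive resync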

If there is no md0 device for the array, manually create md0 with mdadm -CfR.

RAID fail - Can't be mounted, status unmounted
1. Make sure the RAID status is active (more /proc/mdstat).
2. Try mounting manually:
# mount /dev/md0 /share/MD0_DATA -t ext3
# mount /dev/md0 /share/MD0_DATA -t ext4
# mount /dev/md0 /share/MD0_DATA -o ro (read-only)
3. Use e2fsck / e2fsck_64 to check the file system:
# e2fsck -ay /dev/md0 (automatic, answering yes to all questions)
4. If the check reports many errors, there may not be enough memory; create more swap space using the procedure on the next slide.
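If all of the mount attempts in step 2 fail, the kernel log usually says why; a quick check:
# dmesg | tail
> the last lines typically name the reason the mount was rejected (bad superblock, wrong fs type, etc.)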

RAID fail - Can't be mounted, status unmounted (cont.)
Use the following commands to create more swap space:
[~] # more /proc/mdstat
.......
md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]
      530048 blocks [2/2] [UU]
..........
[~] # swapoff /dev/md8
[~] # mdadm -S /dev/md8
mdadm: stopped /dev/md8
[~] # mkswap /dev/sda2
Setting up swapspace version 1, size = 542859 kB
no label, UUID=7194e0a9-be7a-43ac-829f-fd2d55e07d62
[~] # mkswap /dev/sdb2
no label, UUID=0af8fcdd-8ed1-4fca-8f53-0349d86f9474
[~] # mkswap /dev/sdc2
no label, UUID=f40bd836-3798-4c71-b8ff-9c1e9fbff6bf
[~] # mkswap /dev/sdd2
no label, UUID=4dad1835-8d88-4cf1-a851-d80a87706fea
[~] # swapon /dev/sda2
[~] # swapon /dev/sdb2
[~] # swapon /dev/sdc2
[~] # swapon /dev/sdd2
[~] # e2fsck_64 -fy /dev/md0

RAID fail - Can't be mounted, status unmounted (cont.)
If there is no file system superblock or the check fails, you can try a backup superblock.
1. Use the following command to find the backup superblock locations:
# /usr/local/sbin/dumpe2fs /dev/md0 | grep superblock
Sample output:
Primary superblock at 0, Group descriptors at 1-6
Backup superblock at 32768, Group descriptors at 32769-32774
Backup superblock at 98304, Group descriptors at 98305-98310
...163840...229376...294912...819200...884736...1605632...2654208...4096000...7962624...11239424...20480000...23887872...71663616...78675968...102400000...214990848...512000000...550731776...644972544
2. Check and repair the file system using the alternate superblock at 32768:
# e2fsck -b 32768 /dev/md0
fsck 1.40.2 (12-Jul-2007)
e2fsck 1.40.2 (12-Jul-2007)
/dev/sda2 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
.......
Free blocks count wrong for group #241 (32254, counted=32253).
Fix? yes
.........
/dev/sda2: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda2: 59586/30539776 files (0.6% non-contiguous), 3604682/61059048 blocks
3. Try to mount the file system again:
# mount /dev/md0 /share/MD0_DATA -t ext4

RAID fail - able to mount but data has disappeared
If the mount succeeds but the data has disappeared, unmount the RAID and run e2fsck again (you can try a backup superblock).
If that still fails, try a data recovery program (photorec, R-Studio) or contact a data recovery company.
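photorec is interactive, but the array and a destination directory can be given up front; a sketch (the destination path is an assumption, and it must sit on a different volume than the one being recovered):
# photorec /log /d /share/external/recovered /dev/md0
> scans md0 for recoverable files and writes them under the given directory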

RAID fail - RAID is degraded, read-only
In the degraded, read-only state, more disks have failed than the RAID can tolerate. Help the user find the faulty disks if the web UI isn't helpful:
- Check klog or dmesg to find the faulty disks (see the example below)
Ask the user to back up the data first.
If the disks look OK, after the backup, try "mdadm -CfR --assume-clean" to recreate the RAID.
If the above doesn't work, recreate the RAID.
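A quick way to spot failing members (the grep pattern is an assumption; exact messages vary by driver):
# dmesg | grep -i "i/o error"
> I/O errors name the failing device, e.g. sda
# mdadm -D /dev/md0
> the State line and the per-disk list show which members are faulty or removed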

[Slide: Degraded mode (read-only), with the failed drive marked (X)]

NAS fail - Mount the HDD(s) in another QNAP NAS
The user can plug the HDD(s) into another NAS of the same model to access the data.
The user can plug the HDD(s) into a NAS of a different model and access the data by performing a system migration:
http://docs.qnap.com/nas/en/index.html?system_migration.htm
Note: the TS-101/201/109/209/409/409U series doesn't support system migration.
Since the firmware is also stored on the HDD(s), the firmware version on disk may differ from the firmware on the NAS. A firmware upgrade may be required after the operation above.

NAS fail - Access HDD(s) data with a Windows PC
For single-disk or RAID-1 configurations, the user can plug one of the HDDs into a PC (via USB, SATA, or eSATA) and access the data with 3rd-party software (Ext2Fsd, Explore2fs, etc.). Check the following for details:
http://www.soluvas.com/read-browse-explore-open-ext2-ext3-ext4-partition-filesystem-from-windows-7/
Note: file/folder names are in Unicode (UTF-8).
The TS-109/209 use a non-standard ext3, so the QNAP live CD is needed to access the data.
Procedures: ftp://csdread:csdread@ftp.qnap.com/NAS/live_cd/TS109-209_data_recovery_with_Live_CD.pdf
Live CD ISO: ftp://csdread:csdread@ftp.qnap.com/NAS/live_cd/Data_Recover_live-cd_2009-01-15_TS109-209.iso
For other RAID configurations, the user can use R-Studio to mount the RAID and access the data. Check the following link for a RAID-0/5 example:
https://docs.google.com/open?id=0B8u8qWRYVhv0ZTk4OTEzYWQtY2ZiOC00NmZjLWE1OWUtNTJhNDE3OGQ5ZDYw

NAS cannot boot correctly with HDDs installed
If the NAS cannot boot correctly with HDDs installed, but boots without any problem once the HDDs are removed, the cause could be a faulty HDD, I/O errors on some blocks of an HDD, or corrupt configuration/system files.
If the user wants to access the data quickly, try the following procedure (steps 3-4 are sketched below):
1. Power on the NAS without the HDDs installed.
2. Hot-plug the HDDs into the NAS.
3. Assemble the RAID.
4. Copy the data off with WinSCP, or back it up to an external drive.
NOTE: ARM-based NAS don't support SFTP when booted without HDDs, so an external drive has to be connected for the backup. See the following slide for the procedure to mount an NTFS external drive.
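Steps 3-4 in shell form (the device names, RAID level, and backup path are assumptions; match them to the real array):
# mdadm -AfR /dev/md0 /dev/sd[a-d]3
> force-assemble the existing array from the hot-plugged disks
# mount /dev/md0 /share/MD0_DATA -t ext4
> mount it; add -o ro if the file system is suspect
# cp -a /share/MD0_DATA/. /share/external/backup/
> copy everything to the mounted external drive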

Mount an NTFS/HPFS volume on an ARM-based NAS booted without HDDs
The following is the procedure to mount an NTFS/HPFS volume on an ARM-platform NAS without an initial HDD.
1. Download the following two files to the NAS:
ftp://csdread:csdread@ftp.qnap.com/NAS/temp/nls_utf8.ko
ftp://csdread:csdread@ftp.qnap.com/NAS/temp/ufsd.ko
2. Put nls_utf8.ko in /lib/modules/others.
3. Put ufsd.ko in /lib/modules/misc.
4. insmod nls_utf8.ko and insmod ufsd.ko
5. mount -t ufsd /dev/sdya1 /share/esata -o iocharset=utf8,dmask=0000,fmask=0111,force
NOTE: If the disk is larger than 2TB, the 1st partition may be GPT, so we have to mount the 2nd partition.
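For the >2TB case in the note, the mount targets the second partition instead (device name assumed, mirroring step 5):
# mount -t ufsd /dev/sdya2 /share/esata -o iocharset=utf8,dmask=0000,fmask=0111,force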

Data deleted accidentally by a user/administrator
1. The user deleted folders/files:
Use photorec/R-Studio/data recovery software to recover the data. Check the following link for using R-Studio:
https://docs.google.com/open?id=0B8u8qWRYVhv0ZTk4OTEzYWQtY2ZiOC00NmZjLWE1OWUtNTJhNDE3OGQ5ZDYw
2. The user removed the RAID volume: see the next slide.
3. The user formatted the RAID volume: use photorec/data recovery software to recover the data.
4. The user performed Restore to Factory Default: this formats the RAID and resets all settings; same as 3.
5. The user removed HDD(s) and caused the RAID volume to fail: "mdadm -CfR --assume-clean" should work.

The user removed the RAID volume
# more /proc/mdstat        ** check whether the RAID is really removed
# mdadm -E /dev/sda3        ** check whether the md superblock is really removed
# mdadm -CfR --assume-clean /dev/md0 -l 5 -n 3 /dev/sda3 /dev/sdb3 /dev/sdc3        ** create the RAID, assuming a 3-HDD RAID-5
# e2fsck -y /dev/md0        ** check the file system, answering "yes" to all questions; on 64-bit models use e2fsck_64
# mount /dev/md0 /share/MD0_DATA -t ext4        ** mount the RAID back
# vi raidtab        ** manually recreate the RAID table
# rm /etc/storage.conf      ** refresh the web UI volume display
# reboot        ** the removed network share(s) must be added back after the reboot
Note: Only TS-x79, TS-809 series, and D510/D525 models with 5 or more bays support the 64-bit commands.