
1 [Storage] Version 1.2

2 Course Outline
Introduction to Network Storage
RAID Technologies
Storage Essentials
  Basic Terminologies and Concepts
  Hard Drive Interface Technologies
SAN Technologies
  Fiber Channel Technology
  iSCSI Technology
D-Link SAN (Storage Area Network)
  D-Link Products for Storage Area Network
  Market Analysis for D-Link SAN Products
  D-Link SAN Implementation
  SAN Product Features Overview
  Volume Management
  Device Management
  iSCSI Features
  Volume and RAID Support
D-Link NAS (Network Attached Storage)
  D-Link Products for Network Attached Storage
  Market Analysis for D-Link NAS Products
  NAS Product Features Overview
  Managing the Device
  User and Group Management
  Appliance Servers
  Network Features
  USB Port Applications
Applications and Solutions for Network Storage
  NAS Applications
  SAN Applications

3 Introduction to Network Storage

4 Introduction to Network Storage
After this section, you should gain more knowledge of the following:
- Types of current storage solutions for computerized devices
- Characteristics of DAS and the challenges of using it
- Characteristics of NAS and the benefits/advantages that it offers
- Characteristics of SAN and the benefits/advantages that it offers
- Differences among each storage solution

5 Evolution of Storage Technology
1940s – Data was mostly stored on punched cards and punched paper tape.
1951 – The first computer to use magnetic tape for storage.
1956 – IBM introduced the first commercial hard disk drive, known as RAMAC (Random Access Method of Accounting and Control).
1962 – The laser diode was invented by IBM; it became the fundamental technology for read-write optical storage devices.
1963 – IBM introduced the first storage unit with removable disks, marking the end of the punched-card era.
1970 – Portable storage was born with the invention of the floppy disk.
1978 – The first patent for RAID technology was filed.
1981 – The intelligent disk drive interface "SASI" was developed by Shugart Associates and NCR Corporation. This interface is the predecessor of the SCSI interface.
1982 – The SCSI interface was born, developed from its predecessor, SASI.
1984 – Compaq and Western Digital produced the ST506 controller, which could be mounted on the hard disk drive and connected to the PC using a 40-pin cable.

6 Evolution of Storage Technology (cont'd)
1985 – The first IDE drive was built by integrating the ST506 controller into the hard disk drive.
1986 – The SCSI specification was defined in an ANSI standard.
1994 – SCSI-2 became an ANSI standard, and the IDE standard was approved by ANSI.
1996 – The ATA-2 interface, which complied with the corresponding ANSI standard, was the AT Attachment Interface with Extensions.
1998 – The ATA/ATAPI-4 interface, which complied with the corresponding ANSI NCITS standard, was the AT Attachment Interface with Packet Interface Extension.
2000 – The ATA/ATAPI-5 interface, which complied with the corresponding ANSI NCITS standard, was the AT Attachment Interface with Packet Interface-5.
2000 – The Serial ATA 1.0 Working Group was established to specify Serial ATA for desktop applications.
2001 – Serial ATA 1.0 was released in August 2001 (with subsequent revisions 1.0a and 1.1), providing a significant improvement over parallel ATA.
2003 – Hitachi bought IBM's data storage division.

7 Types of Storage Solution
Internal Storage: Memory (DDR); IDE/ATA hard disk / optical compact disc; SCSI hard disk; SATA hard disk.
External Storage: Direct Attached Storage (DAS); Network Storage, consisting of Network Attached Storage (NAS) and Storage Area Network (SAN); USB storage enclosure; FireWire 1394 storage enclosure; slim disk memory.

Storage can be divided into two major types:

Internal Storage
Internal storage refers to storage media built inside a client device and attached directly to the backplane of the client device (computer, notebook/laptop, etc.). Examples of internal storage media are internal hard disks (IDE/PATA, SATA, SCSI) and DDR memory. Internal hard disks such as IDE/PATA, SATA, and SCSI HDDs are often considered a very basic example of Direct Attached Storage (DAS), since the storage media is directly attached to the client device. However, most of the time the term internal storage means storage media built inside a client device, while DAS usually refers to an external storage enclosure directly attached to the client. Based on how the storage media is connected (directly or indirectly attached), internal hard disks are a type of DAS; based on the location of the storage media, they are still a type of internal storage.

External Storage
External storage refers to storage media placed outside the client device, usually as an independent (external) storage enclosure. Based on how the storage enclosure is connected to the client, external storage can be separated into further categories, which may include:
- Direct Attached Storage (DAS)
- Network Storage
- USB storage enclosure (portable hard disk, USB flash disk/thumb drive)
- FireWire 1394 storage enclosure
- Slim disk memory

Note: The DAS referred to in this material is an independent external storage enclosure; the term may not be suitable for describing an internal hard disk.

8 Direct Attached Storage (DAS)
A storage system directly attached to a client (commonly a computer or server), without a storage network in between. A common example of DAS is a storage enclosure externally attached to a server, where clients on the network must access the server in order to reach the storage device.

[Diagram: clients on a local area network reach an Oracle database server, file server, Active Directory server, and network application server; each server connects through a host bus adapter to its own DAS enclosure (DAS #1 through DAS #4).]

DAS is the most basic level of storage solution, in which storage devices are part of the computer (as with internal drives) or directly attached to a single server (as with RAID arrays or tape libraries). Clients on the network must therefore access the server in order to connect to the storage device. DAS is ideal for localized file sharing in small environments with a single server or a few servers, such as small businesses, departments, or workgroups that do not need to share data resources across the entire (enterprise) network.

9 Challenges of DAS
- Difficulty managing servers and storage, with slow backups causing heavy LAN congestion
- Limited number of drives supported
- Limitation on storage size
- Inability to share storage across multiple servers
- Time-consuming and complex backup and management
- Need for storage downtime (off-line) when installing additional drives

Direct Attached Storage is generally how most SMBs start out, since servers have drives built in. As the need for increased storage arises, additional hard drives are often installed directly into the servers. There are a number of problems with this approach, the largest of which is that the server generally has to be taken off-line while the new drives are being installed. In addition, there is a limit to the number of drives that can be supported by a given server. While Direct Attached Storage arrays and servers with RAID support are available, they are more expensive than standard servers, still have limitations on overall storage size and on the ability to share storage across multiple servers, and are time-consuming to manage and back up.

With Direct Attached Storage, managing backup is quite difficult. Storage devices are distributed throughout the company, often built into servers and workstations/PCs with different operating systems and usage requirements, making it nearly impossible to create a reliable, automated backup solution. Another major disadvantage of Direct Attached Storage is the difficulty of utilizing the storage efficiently across multiple servers and users. Drives added to one server are generally not easily available to other servers, so as a company's storage needs grow, management gets increasingly complicated. If a user whose account is on Server1 needs additional storage space, they may not be able to be assigned unused space on Server2 without moving their account. This problem is solved by the storage virtualization provided by network storage.

10 Solution for DAS
Simplify storage management by separating the data from the application server: migrate from DAS to network storage (DAS → Network Storage).

11 Why Do We Need Network Storage?
- Volume of data keeps growing exponentially
- Redundancy and backup necessity
- Data availability and accessibility
- Storage consolidation for centralized management*
- Increased reliability and better performance (speed)
- Storage virtualization*
- Overall cost reduction
- Data protection

* Unique characteristics possessed by SAN only.

Advantages of network storage compared to DAS are as follows:
- Effective utilization of storage resources through centralized access
- Simplified, centralized management of storage, which reduces administrative workload
- Increased flexibility and scalability
- Improved throughput performance to shorten data backup and recovery time
- Non-disruptive business operations when you add or redeploy storage resources
- Higher data availability for business continuance through a resilient network design

12 Network Attached Storage (NAS) Overview
NAS is a file-level computer data storage device connected to a computer network, providing data access to heterogeneous network clients. A NAS unit is essentially a self-contained computer connected to a network, with the sole purpose of supplying file-based data storage services to other devices on the network. NAS units are usually accessed by workstations and servers over a network protocol such as TCP/IP, using applications such as Network File System (NFS) or Common Internet File System (CIFS) / Server Message Block (SMB) for file access.

[Diagram: clients, an application server, and a file server share a public local area network with the NAS.]

NAS is a dedicated storage server based on a client-server design, much like a file server with storage internally attached to it. A NAS can be thought of as a computer without a monitor, keyboard, or mouse; it has its own embedded operating system. One or more drives can be attached to a NAS system to increase its total capacity, but clients always connect to the NAS box/head rather than to the individual disks. NAS provides file sharing to clients and servers in a mixed/heterogeneous environment. With DAS, each server runs its own operating platform, so there is no common storage in an environment that may include a mix of Windows, Mac, and Linux workstations. NAS systems can integrate into any environment and serve files across all operating platforms.

Unlike a SAN, which connects to a Fiber Channel network, a NAS enclosure connects to a TCP/IP network that also includes servers and workstation clients. NAS solutions are typically configured as file-serving appliances accessed by workstations and servers through a network protocol such as TCP/IP and applications such as NFS or CIFS for file access. NAS storage scalability is often limited by the size of the self-contained NAS appliance enclosure. Adding another appliance is relatively easy, but sharing the combined contents is not. Because of these constraints, data backups in NAS environments typically are not centralized and are therefore limited to direct attached devices (such as dedicated tape drives or libraries) or to a network-based strategy where the appliance data is backed up over a corporate or dedicated LAN. Increasingly, NAS appliances use SANs to solve problems associated with storage expansion as well as data backup and recovery. NAS works well for organizations needing to deliver file data to multiple clients over a network. Because most NAS requests are for smaller amounts of data, data can be transferred efficiently over long distances.

13 Storage Area Network (SAN) Overview
A high-performance storage network that transfers block-level data between servers and storage devices, separate from the local area network (LAN) traffic. In a SAN environment, storage devices such as DAS enclosures, RAID arrays, or tape libraries are connected to servers using Fiber Channel or iSCSI.

Characteristics of SAN:
- Virtualization
- Storage consolidation
- Scalable
- Block data transfer using encapsulated SCSI

[Diagram: application and file servers sit on the public local area network with clients and connect to the SAN over a high-performance private storage network.]

A SAN is a high-performance storage network that transfers block-level data between servers and storage devices, separate from the public local area network traffic. In a SAN environment, storage devices such as DAS enclosures, RAID arrays, or tape libraries are connected to servers using Fiber Channel or iSCSI. The unique characteristic of a SAN is that it moves large blocks of data rather than operating at the file level (i.e., on a file basis).

Characteristics of SAN:
- Virtualization – the process of grouping together independent storage devices found across a network to create what appears (to the user) to be a single large storage entity that can be centrally managed.
- Storage consolidation – businesses that seek to move beyond Direct Attached Storage (DAS) and are looking for the benefits offered by SAN will appreciate the xStack Storage solution and its ability to support multiple servers and efficiently pool storage. IP interfaces can be tied together using existing Fast Ethernet equipment, which reduces equipment and staffing costs compared with direct attached storage. Companies can also better utilize storage capacity by pooling more servers together in the storage network.
- Scalable – additional storage enclosures can be added to the SAN to increase the overall storage capacity.

14 Differences between NAS and SAN
Architecture: With NAS, clients see the NAS box as an independent device (a file server); the architecture is client-server based and client requests are sent directly to the NAS. With SAN, clients see the SAN as part of a server (the SAN is connected behind the server in its own network), so clients send requests to the server connected to the SAN.

Access protocols: Clients connect to a NAS and share files using the NFS, CIFS/SMB, or HTTP protocol. Clients connect to a SAN using iSCSI or Fiber Channel, depending on which is supported by the SAN.

Data transfer: NAS uses file-based data transfer (data is identified by file name and other parameters, such as the file metadata: owner, permissions, etc.). SAN uses block-level data transfer over long distances (data is addressed by disk block number, without file system formatting).

Backups and mirrors: On NAS these are done on files, not blocks, which saves bandwidth and time. On SAN they require a block-by-block copy, even if blocks are empty, and a mirror machine must be equal to or greater in capacity than the source volume.

15 Comparison of the Storage Solutions
DAS enclosure: directly connected to a client; slower data access compared to network storage; direct data transfer; data transfer using the SCSI protocol.

NAS enclosure: connected to servers and workstations via a public network; fast data access (depends on the LAN speed); file-level data transfer; data transfer using the NFS / CIFS / SMB protocols.

SAN enclosure: connected to servers over a private storage network; fastest data access (depends on which protocol is used); block-level data transfer; Fiber Channel or iSCSI used as the data transfer protocol.

[Diagram: clients and servers on the public LAN; the application and file servers connect to the SAN appliance over a high-performance private storage network.]

16 Summary: Introduction to Network Storage
Clients can choose from three types of storage systems to keep their data on: Direct Attached Storage (DAS), Network Attached Storage (NAS), and Storage Area Network (SAN).
Direct Attached Storage (DAS) is the most commonly used data storage solution for end-user client devices (computers, servers). It attaches the storage enclosure directly to the client device.
Network Attached Storage (NAS) is mainly targeted at home and SMB users, and offers the benefits of network storage with easy file sharing and centralized data storage over the IP network.
Storage Area Network (SAN) is mainly targeted at server farms or special applications, e.g. IP surveillance, and offers high-performance network storage for data transfers over the enterprise network, with benefits including virtualization and storage consolidation. D-Link supports data transfer over the iSCSI protocol for SAN devices.

17 Questions and Answers: Introduction to Network Storage
1. What is the characteristic of Direct Attached Storage?
   A. Storage is connected to the server without being separated with a TCP/IP network
   B. Storage consolidation capability
   C. Data transfer using Network File System (NFS) protocol
   D. Link multiple storage repositories to multiple clients and servers
   Answer: A

2. What is the characteristic of D-Link Network Attached Storage?
   A. Provide slow data access
   B. Block data transfer along long distance is possible
   C. Data transfer using CIFS/SMB protocol
   D. Support server virtualization
   Answer: C

3. What are the characteristics of D-Link Storage Area Network? (Choose two)
   A. File-level data transfer along long distance
   B. Storage is connected directly to the server using iSCSI protocol
   C. Block data transfer
   D. Support storage virtualization and consolidation
   Answers: C, D

18 RAID Technologies

19 RAID Technologies
After this section, you should gain more knowledge of the following:
- RAID mechanisms overview
- RAID types supported by D-Link network storage appliances
- Characteristics of each RAID type supported by D-Link, as well as the advantages and disadvantages of each (RAID 0, RAID 1, RAID 5, RAID 10, and JBOD)

20 RAID Technology Overview
Redundant Array of Independent Disks (RAID) is a data storage mechanism for dividing and/or replicating data over multiple hard drives, which may provide better performance, reliability, and/or larger data volume sizes. Depending on the type of RAID applied, different benefits can be achieved. D-Link network storage supports several RAID technologies, as described below:

RAID 0 (striped) – Distributes each block of data among several drives to improve the speed of access. Not redundant; striped.
RAID 1 (mirrored) – Two copies of all data are written to independent disks. Redundant; not striped.
RAID 10 (mirrored striped) – Stripes the data among several drives and then mirrors the data to another set of disks. Redundant; striped.
RAID 5 (parity) – Distributes one copy of the data among several drives and adds parity blocks spread throughout the volume to protect against the loss of any single drive. Redundant; striped.
JBOD (N/A) – All the disks are grouped together to form one large volume; the data is written to the disks in sequential order. Not redundant; not striped.

21 RAID 0 Technology Overview
Characteristics of RAID 0:
- RAID 0 works by striping the data (data striping) across the hard drives
- At least two hard drives must be provided
- Improved performance (high-speed data transfer)
- No fault tolerance
- No error checking

Advantages: speed enhancement and improved I/O performance; maximum utilization of storage capacity*; very simple design and easy to implement.
Disadvantages: no data redundancy or fault tolerance; a failure of any disk in an array results in all data in that array being lost.
* Each physical disk must be of the same capacity to achieve 100% storage capacity utilization.

Characteristics of RAID 0 in detail:
- A minimum of 2 hard drives is needed for RAID 0.
- Improved performance with high-speed data transfer. The greater the number of disks in an array, the higher the bandwidth and the faster the data transfer rate. I/O performance is also greatly improved by spreading the I/O load across many channels and drives. There is no additional overhead, such as parity calculation, that could lower performance.
- Storage capacity. In RAID 0, the total storage capacity equals the sum of the capacities of all the disks in the RAID 0 array group. This means that if you have two 80 GB disks in a RAID 0 array, the total storage capacity available to store data is 160 GB. RAID 0 can be created with disks of different sizes, but if the disk capacities differ, the total storage capacity of the array equals the number of disks in the array multiplied by the size of the smallest disk. For example, if a 120 GB disk is striped together with a 100 GB disk, the size of the array will be 200 GB (number of disks * smallest disk size = 2 * 100 GB).
- No fault tolerance. If any disk in the array fails, the entire array is destroyed, resulting in data loss. Because the data is distributed in equally sized blocks across all the drives in the array, there is no data redundancy or backup in RAID 0 unless a backup is configured manually by the administrator.
- No error checking. RAID 0 does not implement any error checking, so data errors are unrecoverable.

When is RAID 0 suitable? RAID 0 is recommended for environments where data transfer speed is a priority but downtime from a disk failure is not a big issue, for example video production or editing, multimedia applications, or any application requiring high bandwidth. RAID 0 should NEVER be used in a mission-critical environment, where fault tolerance is a very important issue.
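The capacity rule and round-robin striping described above can be modeled in a few lines of Python. This is an illustrative sketch only (the function names and the example disk sizes are hypothetical), not D-Link firmware behavior:

```python
def raid0_capacity(disk_sizes_gb):
    """Usable RAID 0 capacity: number of disks multiplied by the smallest disk."""
    return len(disk_sizes_gb) * min(disk_sizes_gb)

def raid0_layout(num_blocks, num_disks):
    """Round-robin (striped) placement: block i lands on disk i % num_disks."""
    layout = {disk: [] for disk in range(num_disks)}
    for block in range(num_blocks):
        layout[block % num_disks].append(block)
    return layout

print(raid0_capacity([120, 100]))   # 200 (GB), matching the 120 GB + 100 GB example above
print(raid0_layout(6, 2))           # {0: [0, 2, 4], 1: [1, 3, 5]}
```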

22 Illustration of RAID 0
[Diagram: data blocks 1 through 6 are striped across two disks (Disk 0 and Disk 1) in the network storage unit.]
If RAID 0 is in use and one of the disks in the array crashes, the rest of the disks in the array will also stop serving data. This results in total data loss.

23 RAID 1 Technology Overview
Characteristics of RAID 1:
- RAID 1 works by mirroring the data
- At least two hard drives must be provided
- Fault tolerance

Advantages: 100% data redundancy.
Disadvantages: highest disk overhead of all RAID types; inefficient, because only 50% of the physical drives' storage capacity is usable.

Characteristics of RAID 1 in detail:
- A minimum of two hard drives (an even number of disks) must be provided in order to mirror the data.
- Fault tolerance: RAID 1 provides fault tolerance against disk errors and the failure of any single drive in the array.

When is RAID 1 suitable? RAID 1 is recommended for applications that require high availability, where immediate access to the data must remain possible if a disk failure occurs, such as financial applications (accounting, payroll, taxation, etc.).

24 Illustration of RAID 1
If RAID 1 is in use and the primary disk crashes, the mirrored disk automatically takes over from the primary disk.
[Diagram: blocks 1 through 4 are written to both Disk-0 (primary disk) and Disk-1 (mirrored disk) in the network storage unit: 100% redundancy.]

25 RAID 5 Technology Overview
Characteristics of RAID 5:
- Striped set with distributed parity
- A minimum of three disks must be provided to implement RAID 5
- Offers data protection and increases throughput

Advantages: 100% data protection; more usable storage capacity than RAID 1; high read data transaction rate; the parity is distributed over all of the disks rather than placed on one dedicated disk.
Disadvantages: extra time is needed to calculate the parity; a disk failure has a medium impact on throughput; rebuilding the volume after a disk failure is more difficult than with RAID 1.

RAID 5 is a striped set with distributed parity. Distributed parity requires all drives but one to be present to operate. A failed drive requires replacement, but the array is not destroyed by a single drive failure. After a drive fails, any subsequent reads can be calculated from the distributed parity, so that the drive failure is masked from the end user. The array will lose data if a second drive fails, and it is vulnerable until the data that was on the failed drive has been rebuilt onto a replacement drive.

Parity is a calculated value that can be used to reconstruct data after a failure. While data is being written to a RAID 5 volume, parity is calculated by performing an exclusive OR (XOR) operation on the data, and the resulting parity is written to the volume. If a portion of a RAID 5 volume fails, the data that was on that portion can be recreated from the remaining data and the parity.

When is RAID 5 suitable? When users need an acceptable trade-off between availability, capacity, data protection, and performance compared to other RAID configurations, RAID 5 is the solution that provides those advantages. RAID 5 provides acceptable levels of data protection, disk utilization, and performance for most applications. With RAID 5, users can enjoy high productivity in performance-demanding tasks and use the volume as a repository for precious artwork and digital assets. Recommended applications for RAID 5 deployment include file and application servers, database servers, web, e-mail, and news servers, and intranet servers; these applications generally require a balance of availability, capacity, data protection, and performance.

26 Illustration of RAID 5
Using RAID 5, if one of the disks in the array fails, the data on the failed disk can be recovered onto a new disk that replaces it.

[Diagram: data bits are striped across Disk-0, Disk-1, and Disk-2 together with parity blocks (P); Disk-2 fails and its data is rebuilt onto a replacement disk, after which the data is fully recovered.]

Each parity block in a RAID 5 configuration is produced by an XOR calculation. XOR compares two binary digits and produces the following results for any given pair of bits:
1 XOR 1 = 0
1 XOR 0 = 1
0 XOR 0 = 0
0 XOR 1 = 1

In a RAID 5 implementation, the data bits on one disk are XORed bit by bit with the data bits on the next available disk, and the result, which is the parity data, is written to the defined space on another disk. RAID 5 writes the data in a distributed/striped manner, as illustrated above. When any drive in the array fails, the data on the failed drive can be rebuilt through the XOR calculation: each missing bit is recalculated from the corresponding bits retrieved from the surviving drives. (P: parity)
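The rebuild works because the parity block is the XOR of the data blocks in a stripe, so any single missing block equals the XOR of everything that survives. Below is a minimal Python sketch of that relationship, using hypothetical one-bit stripe values rather than real controller code:

```python
from functools import reduce

def parity(blocks):
    """RAID 5 parity for one stripe: XOR of all data blocks."""
    return reduce(lambda a, b: a ^ b, blocks)

def rebuild(surviving_blocks, stripe_parity):
    """Recover the single missing block: XOR of the survivors and the parity."""
    return reduce(lambda a, b: a ^ b, surviving_blocks, stripe_parity)

data = [0b1, 0b0]              # data blocks of one stripe (e.g., on Disk-0 and Disk-1)
p = parity(data)               # 1 XOR 0 = 1, written to Disk-2
lost = data[1]                 # pretend the disk holding data[1] fails
print(rebuild([data[0]], p) == lost)   # True: 1 XOR 1 = 0, the missing block is recovered
```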

27 RAID 10 Technology Overview
Characteristics of RAID 10:
- RAID 10 provides mirroring and striping at the same time
- A minimum of four disks (an even number of disks) is required
- Provides fault tolerance and improves performance

Advantages: provides fault tolerance to prevent data loss; provides high performance for I/O operations (read and write).
Disadvantages: expensive, since many disks are required to implement this RAID technology; only 50% of the physical drives' storage capacity is usable, because of the mirroring mechanism.

In RAID 10 the volume is first mirrored, and then both mirrors are striped. This provides fault tolerance and improves performance but increases complexity. The key difference from RAID 0+1 is that RAID 10 creates a striped set from a series of mirrored drives. In a failed-disk situation, RAID 10 performs better because all the remaining disks continue to be used. The array can sustain multiple drive losses as long as no mirrored pair loses both of its drives.

When is RAID 10 suitable? RAID 10 is an excellent solution for sites that would otherwise have chosen RAID 1 but need an additional performance boost. This RAID type is highly recommended for applications that require both high performance and fault tolerance.

28 Illustration of RAID 10
[Diagram: a RAID 1 mirror pairs Disk-0 with Disk-1 and Disk-2 with Disk-3; the mirrored pairs are then striped (RAID 0), giving very high reliability combined with high performance.]

Characteristics: RAID 10 combines RAID 1 and RAID 0. The implementation first mirrors the data from one disk to another disk, then stripes the data across multiple disk drives in the array. RAID 10 mirrors data across half of the disk drives in the array (the first set of drives), while the data is striped across the remaining drives in the RAID 10 configuration.

Fault tolerance: By combining the features of RAID 0 and RAID 1, RAID 10 provides robust fault tolerance. Access to the data is preserved as long as one disk in each mirrored pair remains available. Referring to the diagram above, if Disk-0 fails, the group still works properly and can respond to read/write requests from clients; the same applies if Disk-2 fails. However, if the surviving disk of a mirrored pair also fails (for example Disk-1 after Disk-0 has already failed), the data is lost and unrecoverable because no copy of that data remains.

Performance: RAID 10 performance is similar to that of RAID 0 while also providing disk redundancy, and it is higher than the performance of RAID 1.
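The fault-tolerance rule just described (the array survives as long as no mirrored pair loses both members) can be checked mechanically. Here is a hypothetical Python sketch assuming the four-disk layout above, with Disk-0/Disk-1 and Disk-2/Disk-3 as the mirrored pairs:

```python
# Hypothetical RAID 10 survival check (illustration only, not controller code).
MIRROR_PAIRS = [("Disk-0", "Disk-1"), ("Disk-2", "Disk-3")]

def raid10_survives(failed_disks):
    """Data survives if every mirrored pair still has at least one working disk."""
    failed = set(failed_disks)
    return all(not (a in failed and b in failed) for a, b in MIRROR_PAIRS)

print(raid10_survives(["Disk-0"]))             # True: mirror Disk-1 still serves the data
print(raid10_survives(["Disk-0", "Disk-2"]))   # True: one disk lost from each pair
print(raid10_survives(["Disk-0", "Disk-1"]))   # False: both copies of the same data are gone
```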

29 JBOD Technology Overview
Characteristics of JBOD (Just a Bunch Of Disks):
- No data redundancy, which means no fault tolerance
- Bigger array capacity
- Two or more hard disks are required to create one logical drive

Advantages: provides 100% storage capacity utilization.
Disadvantages: no data redundancy or fault tolerance.

As the name implies, the disks are merely concatenated together, end to beginning, so they appear to be a single large disk. This mode is sometimes called JBOD, or "Just a Bunch Of Disks". Concatenation may be thought of as the reverse of partitioning: while partitioning takes one physical drive and creates two or more logical drives, JBOD uses two or more physical drives to create one logical drive. Because it consists of an array of independent disks, it can be thought of as a distant relation of RAID. Concatenation is sometimes used to turn several odd-sized drives into one larger, more useful drive, which cannot be done with RAID 0. For example, JBOD can combine 3 GB, 15 GB, 5.5 GB, and 12 GB drives into a single 35.5 GB logical drive, which is often more useful than the individual drives separately.

30 Illustration of JBOD
JBOD is usually known as concatenation, where the total storage capacity equals the sum of the capacities of the separate disks.

[Diagram: Disk-0 and Disk-1 are logically seen as one big storage volume; total storage capacity (Σ) = capacity of Disk-0 + capacity of Disk-1 + ...]

Note that JBOD is different from RAID 0 and cannot be categorized under any RAID level. JBOD does not perform any data striping; it only enlarges the storage capacity by combining multiple physical drives of different capacities into one large virtual storage volume. JBOD writes data to the first disk drive in the JBOD group until that drive is out of space, then continues writing to the next drive in the group, and so on.
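The fill-then-spill write behavior and the capacity rule described above can be sketched as follows. This is an illustration only; the function names are made up and the drive sizes come from the example on the previous slide:

```python
def jbod_capacity(disk_sizes_gb):
    """JBOD concatenation: usable capacity is simply the sum of the disks."""
    return sum(disk_sizes_gb)

def jbod_place(write_gb, disk_sizes_gb):
    """Fill the first disk completely, then spill over to the next, and so on."""
    placement, remaining = [], write_gb
    for i, size in enumerate(disk_sizes_gb):
        used = min(size, remaining)
        placement.append((f"Disk-{i}", used))
        remaining -= used
        if remaining == 0:
            break
    return placement

disks = [3, 15, 5.5, 12]          # the odd-sized drives from the example above
print(jbod_capacity(disks))       # 35.5 (GB)
print(jbod_place(10, disks))      # [('Disk-0', 3), ('Disk-1', 7)]
```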

31 Summary for Each RAID Technology
RAID-0 – No data redundancy; superior read/write performance; minimum of 2 drives.
RAID-1 – Data redundancy; very high read and high write performance; minimum of 2 drives.
RAID-5 – Data redundancy; good performance; minimum of 3 drives.
RAID-10 – Data redundancy; high read/write performance; minimum of 4 drives.
JBOD – No data redundancy; minimum of 2 drives.

D-Link Storage Area Network allows migration between RAID levels, but this depends on the number of HDD drives available. The performance of each RAID level may vary depending on the hardware platform used.
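To complement the summary above, the usable-capacity rules mentioned in the preceding slides (all space for RAID 0 and JBOD, 50% for RAID 1 and RAID 10) plus the standard N-1 rule for RAID 5 can be collected into one small helper. This is an illustrative sketch that assumes equal-sized disks; it is not a D-Link sizing tool:

```python
def usable_capacity_gb(raid_level, num_disks, disk_gb):
    """Usable capacity for an array of equal-sized disks (illustrative rules only)."""
    if raid_level in ("RAID-0", "JBOD"):
        return num_disks * disk_gb            # all space is usable
    if raid_level in ("RAID-1", "RAID-10"):
        return num_disks * disk_gb / 2        # mirroring keeps only 50%
    if raid_level == "RAID-5":
        return (num_disks - 1) * disk_gb      # one disk's worth goes to parity
    raise ValueError(raid_level)

for level, n in [("RAID-0", 2), ("RAID-1", 2), ("RAID-5", 3), ("RAID-10", 4), ("JBOD", 2)]:
    print(level, usable_capacity_gb(level, n, 500), "GB")
```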

32 Summary: RAID Technologies
Redundant Array of Independent Disks (RAID) is a data storage mechanism that provides better performance and/or data reliability. D-Link network storage appliances support RAID 0, RAID 1, RAID 10, RAID 5, RAID 6, and JBOD to offer greater performance and reliability to D-Link users. Which RAID types are supported depends on the model.
RAID 0 provides the best performance, with the fastest data transfer speed, by striping all the data across multiple disks.
RAID 1 provides data redundancy by mirroring/duplicating the data from one disk to another disk.
RAID 5 offers data protection and increases throughput by creating data parity and distributing it across all the provided disks.
RAID 6 offers data protection and increases throughput in the same way as RAID 5, but with two parity blocks per stripe, so it can tolerate two disk failures.
RAID 10 combines RAID 0 and RAID 1 at the same time, providing greater performance while also offering data redundancy to prevent a single point of failure.
Just a Bunch of Disks (JBOD) is not a type of RAID and does not provide data redundancy. It is used to achieve greater storage capacity from all the hard disks, which may have different capacities.

33 Questions and Answers: RAID Technologies
1. Which RAID level does not support fault-tolerance for the stored data?
   A. RAID 0
   B. RAID 1
   C. RAID 10
   D. RAID 5
   E. JBOD
   Answers: A, E

2. Which RAID technology supports the consolidation of all disks with different sizes, thus enlarging the capacity of available storage space?
   Answer: C

34 Storage Essentials

35 Storage Essentials
After this section, you should gain more knowledge of the following:
- Basic terminologies commonly used to explain storage technology
- Different hard drive technologies and the characteristics of each

36 Basic Terminologies
Block – A sequence of bytes or bits in which data is stored and retrieved on disk and tape devices.
Array – A set of physical disks grouped into one or more logical drives.
Logical drive – A set of physical disks that are grouped together and behave, as seen by the user, as if they were a single drive.
Volume – A set of blocks of storage that are organized and presented for use by the server.
Logical Unit Number (LUN) – A number assigned to a logical unit. It can refer to an entire physical disk, or to a subset of a larger physical disk or disk volume. The physical disk or disk volume could be an entire single disk drive, a partition (subset) of a single disk drive, or a disk volume from a RAID controller comprising multiple disk drives aggregated together for larger capacity and redundancy.

LUNs represent a logical abstraction between the physical disk device/volume and the applications. For example, if you partition a disk drive into smaller pieces for your application or system needs (perhaps your server's operating system has a disk drive size limit), the sub-segments share a common SCSI target ID address, with each partition being a unique LUN. In an iSCSI environment, LUNs are essentially numbered disk drives. An initiator negotiates with a target to establish connectivity to a LUN; the result is an iSCSI session that emulates a SCSI hard disk. Initiators treat iSCSI LUNs the same way as they would a raw SCSI or IDE hard drive. For instance, rather than mounting remote directories as is done in NFS or CIFS environments, iSCSI systems format and directly manage file systems on iSCSI LUNs. In enterprise deployments, LUNs usually represent slices of large RAID disk arrays, often allocated one per client. iSCSI imposes no rules or restrictions on multiple computers sharing individual LUNs; shared access to a single underlying file system is left as a task for the operating system.

37 Spare Count
Definition of spare: a spare is a drive (drive B) that is reserved for the purpose of substituting for another drive (drive A) in case drive A fails.
Definition of hot spare: a hot spare is a drive that has been flagged for use if another drive in the array fails.
Definition of spare count: the spare count is the number of drives kept available in case a drive that contains a volume (with data) fails.
When one of the active drives fails, the hot spare drive replaces the failed drive.
[Diagram: active drives plus one hot spare drive; spare count = 1.]
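Below is a toy model of the hot-spare behavior shown above: when an active drive fails, the flagged spare is promoted into the array and the spare count drops by one. The class and drive names are hypothetical; this is not D-Link management code:

```python
# Toy hot-spare model (illustration only).
class Array:
    def __init__(self, active, spares):
        self.active = list(active)      # drives currently holding the volume
        self.spares = list(spares)      # drives flagged as hot spares (spare count = len)

    def drive_failed(self, drive):
        """Replace a failed active drive with a hot spare, if one is available."""
        self.active.remove(drive)
        if self.spares:
            replacement = self.spares.pop(0)
            self.active.append(replacement)   # a rebuild onto the spare would start here
            return replacement
        return None                           # no spare left: the array runs degraded

array = Array(active=["Drive-A", "Drive-B", "Drive-C"], spares=["Drive-D"])
print(array.drive_failed("Drive-B"))   # 'Drive-D' takes over
print(len(array.spares))               # 0: the spare count is now zero
```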

38 Hard Drive Interface Technologies Overview
ATA (Advanced Technology Attachment) – mostly used in desktops and notebooks; consists of two standards: PATA (Parallel ATA) and SATA (Serial ATA).
Other interfaces: SCSI, Serial Attached SCSI (SAS), and Fiber Channel*.

Hard disk drives are accessed over one of a number of bus types, including Parallel ATA (PATA, also called IDE), Serial ATA (SATA), SCSI, Serial Attached SCSI (SAS), and Fiber Channel. At present, the hard drive interfaces in use are SATA, SAS, SCSI, and Fiber Channel.

SATA (Serial ATA) is a storage interface technology that introduces several key advantages, such as full bandwidth to each connected device, hot-plug capability, a smaller connector, standardized connector placement and layout, simpler cabling, and longer cable length. It transfers data by sending one bit at a time.
SAS (Serial Attached SCSI) is a data transfer technology that replaces the parallel SCSI bus technology. Its key advantages are similar to those of SATA.
SCSI is a hardware interface that allows the connection of up to 15 peripheral devices to a single PCI board, called a SCSI host adapter, that plugs into the motherboard.
Fiber Channel is a technology for transmitting data between computer devices at data rates of up to 4 Gbps, and 10 Gbps in the near future. It can run over both copper cables and fiber optic media. It allows concurrent communications among workstations, mainframes, servers, data storage systems, and other peripherals using the SCSI and IP protocols.

* Fiber Channel is now commonly used for SAN solutions but is seldom used for end-user computers. Although Fiber Channel hard drives are available on the market, they are rarely found these days.

39 Serial ATA Value Proposition
Why SATA?

End-user needs: more storage in limited space; improved price/performance; investment protection; lower overall system cost.
Serial ATA value proposition: narrower cabling; lower power requirements; lower pin counts; higher performance (data rates up to 300 MB/s); improved connectivity (no master/slave); longer cabling (reach up to one meter).
System vendor needs: dense boxes; similar components; lower power consumption; increased air flow; more motherboard space.

A few advantages of SATA over IDE:
- SATA cables are thinner and can be longer, effectively improving airflow and ease of handling, neatness, and flexibility inside the computer.
- SATA drives do NOT have jumpers, meaning no fussing with master/slave/cable-select settings.
- SATA transfer rates are generally higher (faster speed).
- SATA handles RAID better.
- Most important: SATA drives are hot-pluggable (hot-swappable), meaning you can plug and unplug them while the computer is running. Most motherboards that support SATA provide external connectors (power and data), so if need be you can use a SATA drive on different computers just like a USB or other external drive (provided it is not the main drive with the system installed, of course).

Serial ATA offers more features and better performance than Parallel ATA.

40 Evolution of SATA
The Serial ATA (SATA) Working Group will deliver incremental specification releases over the next several years. These enhancements will enable the technology to support a variety of possible storage configurations.

Serial ATA 1.0 – Primary inside-the-box storage connection to replace Parallel ATA.

Serial ATA II, Phase 1 – Improved use of SATA 1.0 technology in server and network storage: a backplane interconnect solution for racks of hot-swap drives; a complete enclosure management solution (fan control, drive lights, temperature control, new-device notifications, etc.); performance improvements to address industry needs (firmware/software performance enhancements, including native queuing).

Serial ATA II, Phase 2 – Second-generation speed grade for desktops and network storage systems (targeted 300 MB/s); improvements to address additional needs in higher-end network storage segments; topology support for dual-host active failover; efficient connectivity to a larger number of devices.

41 SCSI Technology Overview
SCSI (Small Computer System Interface) is a set of standards for physically connecting and transferring data between computers/servers and peripheral devices. SCSI is commonly used for hard disks and tape drives, but it can also connect a wide range of other devices, including scanners and CD drives.

Characteristics of SCSI:
- Every device attaches to the SCSI bus in a similar manner.
- SCSI is a peripheral interface in which up to 16 devices (the host adapter counts as one device) can be attached to a single bus; several peripherals can be daisy-chained to one host adapter, using only one slot in the bus. There can be any number of hosts and peripheral devices, but there must be at least one host.
- SCSI is a buffered interface: it uses handshake signals between devices. SCSI-1 and SCSI-2 have the option of parity error checking. Starting with SCSI-U160 (part of SCSI-3), all commands and data are error-checked by a CRC32 checksum.
- SCSI is a peer-to-peer interface: the SCSI protocol defines communication from host to host, from host to a peripheral device, and from peripheral device to peripheral device. However, most peripheral devices are exclusively SCSI targets, incapable of acting as SCSI initiators (i.e., unable to initiate SCSI transactions themselves), so peripheral-to-peripheral communications are uncommon, although possible in most SCSI implementations.

42 Summary: Storage Essentials
Hot spares are standby hard disk drives that are used as a backup to automatically replace a disk when a failure occurs. The spare count is the number of hard disk drives provided as backup disks.
Many hard drive technologies are available on the market, and they continue to evolve over time. The best-known technologies are SATA, SCSI, Serial Attached SCSI (SAS), and Fiber Channel.
SATA is the most commonly used technology today, especially at the end-user level, e.g. in computers.
SCSI was commonly used for hard disks and tape drives, but it can also connect a wide range of other devices, including scanners and CD drives. Today, SCSI is widely used in servers rather than in end-user client devices.

43 Questions and Answers: Storage Essentials
1. What is the benefit of providing a spare disk?
   A. To enlarge the storage capacity when all disks have been used to store data.
   B. Ensure reliability by designating the spare disk as a standby/backup disk which will be used in case of disk failure.
   C. To serve as additional disk for use when scheduled downloading is configured.
   D. To serve as part of a RAID when configured, for example, to save mirrored data for RAID 1.
   Answer: B

2. Select the hard drive type(s) which offer the key advantages of full bandwidth to each connected device, hot plug capability, smaller connector, standardized connector placement and layout, simpler cabling, and longer cable length. (Choose all that apply)
   A. SCSI
   B. SATA
   C. iSCSI
   D. PATA
   Answers: B, D

3. What are the benefits of using SATA hard disks when compared to IDE hard disks? (Choose all that apply)
   A. Master/Slave selection
   B. Smaller cable connector
   C. Speed
   D. Hot-pluggable
   Answers: B, C, D

44 SAN Technologies

45 SAN Technologies
After this section, you should gain more knowledge of the following:
- Technologies built for Storage Area Networks
- Details about FC SAN technology and the required components to implement it on the network
- Details about iSCSI technology, its advantages, and the required components to implement iSCSI on a SAN

46 SAN Technologies Overview
The technologies behind the SAN: technologies created for building a SAN are primarily based on either Fiber Channel or iSCSI. The next few pages explain each of these technologies in greater detail.

[Diagram: an iSCSI initiator reaches the D-Link SAN (the iSCSI target) over a private local network using the TCP/IP protocol, through an Ethernet switch with copper or optical cabling for the iSCSI connection.]

47 Fiber Channel Technology Overview
Fiber Channel (FC) is a channel/network standard defined by Technical Committee T11, the committee within INCITS (InterNational Committee for Information Technology Standards) responsible for Fiber Channel interfaces. An FC network provides the required connectivity, distance, and protocol multiplexing.

Advantages of Fiber Channel*: solutions leadership; reliability; fast data transfer with gigabit bandwidth up to 4 Gbps; multiple topologies; scalability; congestion-free operation; high efficiency; a full suite of services.
* This information is taken from the Fiber Channel Industry Association.

Fiber Channel is a powerful, open ANSI standard with a well-proven track record of economically meeting these challenges. Its advantages include, to name only a few:
- Solutions leadership – Fiber Channel provides versatile connectivity and scalable performance backed by a mature, full market of suppliers. With over 100 companies, including all of the top 20 server and storage suppliers, thousands of product choices, more than 80 million ports installed to date, and overall annual market revenues of more than a billion dollars, you can rest assured that your Fiber Channel investment is preserved, safe, and secure.
- Reliable – Fiber Channel, the most reliable form of storage communications, sustains an enterprise with assured information delivery. Reliability was designed into the Fiber Channel standards and products right from the start.
- 4-gigabit bandwidth now – Gigabit solutions are becoming the norm today; on the horizon is 10 gigabit-per-second data delivery.
- Multiple topologies – Fiber Channel supports the most protocols, with the most published open standards and released products for protocols such as SCSI, IP, ESCON, VI, and AV. Fiber Channel was designed to be totally transparent and autonomous with respect to the protocol mapped over it: SCSI, TCP/IP, video, or raw data can all take advantage of a high-performance, reliable Fiber Channel network.
- Scalable – From single point-to-point high-speed links to integrated enterprises with hundreds of servers, Fiber Channel delivers unmatched performance.
- Congestion free – Fiber Channel's credit-based flow control delivers data as fast as the destination buffer is able to receive it, without dropping frames or losing data and without the need for upper-layer retries. This is one of the exclusive features of Fiber Channel that makes it so well suited for block-level storage data networks and interconnects.
- High efficiency – Fiber Channel has very little transmission overhead. Most importantly, the Fiber Channel protocol is specifically designed for highly efficient operation using hardware protocol offload engines (POEs). Fiber Channel products and installations have long used highly integrated POEs, as necessitated by the high-end, high-performance demands of the markets that were early to adopt Fiber Channel, first as an interconnect and then as the key enabling factor for the advent of Storage Area Networking. Fiber Channel simply and easily provides the best bang for the buck.
- Full suite of services – Fiber Channel surpasses all interconnects when it comes to already-released standards and products required to build a SAN, inside and out. Fiber Channel pioneered and established throughout the market a mature set of storage network services such as discovery, addressing, LUN zoning, fail-over, management, and security.

48 Basic Components of Fiber Channel SAN
Components: storage devices supporting Fiber Channel; a Fiber Channel switch (SAN fabric); Fiber Channel Host Bus Adapters (HBAs); cabling.

[Diagram: servers on the public local area network connect through FC host bus adapters and a Fiber Channel switch, over optical cabling, to FC storage media on the private Fiber Channel SAN.]

The basic hardware of a Fiber Channel SAN comprises the following physical components:
- Storage devices that support Fiber Channel.
- A Fiber Channel switch to form a SAN fabric. The Fiber Channel switch provides any-to-any connectivity for servers and storage devices. Two or more interconnected switches create a SAN fabric, which improves SAN performance while also optimizing scalability and availability.
- Fiber Channel Host Bus Adapters (HBAs). HBAs are used to connect each host to the FC SAN. A Host Bus Adapter consists of hardware and drivers. It is intelligent, negotiating with the switches and devices attached to the network, and it provides processing capabilities that minimize CPU overhead on the host.
- Cabling. Copper and fiber optic cables are the two types of cables used in SANs.

49 iSCSI Technology Overview
Definition of iSCSI (Internet SCSI): the SCSI protocol carried over a TCP/IP network (Ethernet, WAN, wireless, etc.), enabling access to networked storage devices.

Why iSCSI? – iSCSI features:
- Error handling: error checking using the CRC (Cyclic Redundancy Check) methodology; when iSCSI detects errors, it brings down the session (all TCP connections within the session) and restarts it.
- Boot
- Discovery

Advantages of iSCSI:
- Connectivity over long distances
- Lower costs
- Easier implementation and management
- Built-in security

iSCSI (Internet Small Computer System Interface) is an Internet Protocol (IP)-based storage networking standard for linking data storage facilities, developed by the Internet Engineering Task Force (IETF). By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances.

Advantages of iSCSI in more detail:
- Connectivity over long distances. iSCSI offers wide area network coverage, providing a cost-effective long-distance connection that can be used as a bridge to existing Fiber Channel SANs, thus centralizing the administration of storage systems.
- Lower costs. iSCSI SAN solutions capitalize on the pre-existing LAN infrastructure and make use of the much more ubiquitous IP expertise available in most organizations.
- Simpler implementation and management. Managing iSCSI devices for operations such as storage configuration, provisioning, and backup can be handled by the administrator in the same way as for direct attached storage.
- Built-in security. Currently, iSCSI implements CHAP to guarantee secure access to the storage system.

50 Advantages of iSCSI over FC SAN
iSCSI is a better alternative to Fiber Channel SAN for the following reasons:
- Built on stable and familiar standards, providing easier implementation and management
- Ethernet transmissions can travel over the global IP network, so there is no practical distance limitation
- Scalable
- Creates a SAN at lower cost
- Avoids the interoperability issues of Fiber Channel
- Avoids the security shortcomings of Fiber Channel
Source: IDC, September/December 2006.

The advantages of iSCSI over Fiber Channel SAN are as follows:
- Simpler implementation and management. iSCSI solutions require little more than the installation of the Microsoft iSCSI initiator on the client server, a target iSCSI storage device, and a Gigabit Ethernet switch in order to deliver block storage over IP. Managing iSCSI devices for operations such as storage configuration, provisioning, and backup can be handled by the system administrator in the same way as for direct attached storage. Solutions such as clustering are actually simpler with iSCSI than with Fiber Channel configurations.
- No practical distance limitation, even for MAN or WAN environments. SANs have delivered on the promise to centralize storage resources, at least for organizations whose resources are limited to a metropolitan area. Organizations with divisions distributed over wide areas have a series of unlinked "SAN islands" that the current Fiber Channel (FC) connectivity limitation of 10 km cannot bridge. There are new means of extending Fiber Channel connectivity up to several hundred kilometers, but these methods are both complex and costly. iSCSI over wide area networks (WANs) provides a cost-effective long-distance connection that can be used as a bridge to existing FC SANs, or between native iSCSI SANs, using in-place metropolitan area networks (MANs) and WANs.
- Lower cost. Unlike an FC SAN solution, which requires the deployment of a completely new network infrastructure and usually requires specialized technical expertise and specialized hardware for troubleshooting, iSCSI SAN solutions capitalize on the pre-existing LAN infrastructure and make use of the much more ubiquitous IP expertise available in most organizations.
- No interoperability issues. While Fiber Channel storage networks currently have the advantage of high throughput, interoperability among multi-vendor components remains a shortcoming. iSCSI networks, which are based on the mature TCP/IP technology, are not only free of interoperability barriers but also offer built-in gains such as security. As Gigabit Ethernet is increasingly deployed, throughput using iSCSI is expected to increase, rivaling or even surpassing that of Fiber Channel.
- Better security. No security measures are built into the Fiber Channel protocol; instead, security is implemented primarily by limiting physical access to the SAN. While this is effective for SANs restricted to locked data centers (where the wire cannot easily be "sniffed", since doing so requires inserting a special analyzer between the host bus adapter and the storage), such security methods lose their efficacy as the FC protocol becomes more widely known and SANs begin to connect to the IP network. In contrast to Fiber Channel, the iSCSI protocol provides security for devices on the network by using the Challenge Handshake Authentication Protocol (CHAP) for authentication.

According to IDC, the iSCSI market is growing at an explosive rate of about 108.4% per year, and by 2010 iSCSI products will account for more than 21% of the storage market.

51 iSCSI SAN Overview SAN Technologies iSCSI SAN components consist of:
SAN Technologies iSCSI SAN Overview iSCSI SAN components consist of: iSCSI Client/Host (iSCSI initiator) A client device, for example, a server (or PC), which attaches to an IP network The iSCSI client initiates requests and receives responses from an iSCSI target iSCSI Target A device that receives and processes the iSCSI commands, for example, a storage device Server iSCSI Initiator TCP/IP Protocol
iSCSI solutions require little more than the installation of the Microsoft iSCSI initiator on the client server, a target iSCSI storage device, and a Gigabit Ethernet switch in order to deliver block storage over IP. iSCSI SAN components are largely analogous to FC SAN components. These components are as follows:
iSCSI Client/Host The iSCSI client or host (also known as the iSCSI initiator) is a system, such as a server (or PC), which attaches to an IP network and initiates requests and receives responses from an iSCSI target. Each iSCSI host is identified by a unique iSCSI qualified name (IQN). To transport block-level (SCSI) commands over the IP network, an iSCSI host must either attach an iSCSI Host Bus Adapter (HBA) or install an iSCSI driver, for example the Microsoft iSCSI initiator. The latest Microsoft iSCSI initiator driver can be downloaded from the Microsoft Download Center. A Gigabit Ethernet adapter (transmitting 1000 Megabits per second, Mbps) is recommended for connecting to the iSCSI target. Like the standard 10/100 adapters, most Gigabit adapters use the Category 5e or Category 6 cabling that is already in place. Each port on the adapter is identified by a unique IP address.
iSCSI Target An iSCSI target is any device that receives iSCSI commands. The device can be an end node, such as a storage device, or it can be an intermediate device, such as a bridge between IP and Fiber Channel devices. Each iSCSI target is identified by a unique IQN, and each Ethernet port on the storage array (or on a bridge) is identified by one or more IP addresses. iSCSI Target D-Link SAN
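The exact initiator setup depends on the operating system, but before configuring any initiator it is useful to confirm that the target's iSCSI port is reachable from the host. The sketch below is a minimal, hypothetical check in Python; the IP address is an assumption, and 3260 is simply the standard default TCP port for iSCSI.

```python
import socket

def iscsi_port_reachable(target_ip: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the iSCSI target port can be opened."""
    try:
        with socket.create_connection((target_ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical address of an xStack Storage data port on the private SAN segment.
print(iscsi_port_reachable("192.168.1.100"))
```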

52 Summary: SAN Technologies
iSCSI is a network protocol which enables access to storage devices and network storage over TCP/IP networks. D-Link adopts the iSCSI protocol for use in its D-Link SAN. iSCSI offers several benefits in comparison to Fiber Channel, including better interoperability, scalability, built-in security, lower cost, and freedom from practical distance limitations. To implement iSCSI on the SAN, all of the components must be provided: iSCSI initiator, iSCSI target, and Ethernet switch.

53 Questions and Answers: SAN Technologies
What are the components needed when deploying Fiber Channel SAN? (Choose all that apply) SCSI Storage Switch Fiber Channel Switch FC Host Bus Adapter What component is not needed when deploying iSCSI? Server iSCSI target C, D D

54 D-Link SAN (Storage Area Network)

55 D-Link SAN (Storage Area Network)
After this section, you should gain more knowledge of the following: Various D-Link SAN appliances and differences between each Each part of the hardware in the SAN Key selling points of D-Link SAN appliances Product positioning of D-Link SANs D-Link SAN product interoperability, caching behavior, and common implementation architectures

56 D-Link Storage Area Network
D-Link SAN D-Link Products for Storage Area Network D-Link Storage Area Network
DSN-2100 Series: xStack Storage with 4-port 1GE Copper for SATA-II Hard Drives in RAID Levels 0, 1, 1+0, and 5 (8 Trays)
DSN-3200 Series: xStack Storage with 8-port 1GE Copper for SATA-II Hard Drives in RAID Levels 0, 1, 1+0, and 5 (15 Trays)
DSN-3400 Series: xStack Storage with 1-port 10 GE Fiber for SATA-II Hard Drives in RAID Levels 0, 1, 1+0, and 5 (15 Trays)

57 Components of D-Link DSN-2100 Series
D-Link SAN Components of D-Link SAN Components of D-Link DSN-2100 Series Front Panel Components Front panel after the bezel has been removed Key lock Eight drive bays Power LED Boot and Fault LED Latch Removable Bezel Drive power LED Drive and Activity Fault LED Back Panel Components The xStack Storage unit back panel also has a 10/100 Mbps management port and an RS-232-C DB9 diagnostic/console port. The admin account cannot be deleted in firmware version and above. For security, please be sure to change the password for this account. However, if you lose the password for the admin account, you may use the diagnostic port to reset the password. Diagnostic Port Power Switch Host network connections Power Supply Management Port Reset Switch

58 Components of D-Link DSN-3200 Series
D-Link SAN Components of D-Link SAN Components of D-Link DSN-3200 Series Front Panel Components Removable Bezel Key lock Back Panel Components Power Supply Power Switch Reset Switch The xStack Storage unit back panel also has a 10/100 Mbps management port and an RS-232-C DB9 diagnostic/console port. Host Network Connections Diagnostic Port Management Port

59 Components of D-Link DSN-3400 Series
D-Link SAN Components of D-Link SAN Components of D-Link DSN-3400 Series Front Panel Components Removable Bezel Key lock Back Panel Components Power Supply Power Switch Reset Switch The xStack Storage unit back panel also has a 10/100 Mbps management port and an RS-232-C DB9 diagnostic/console port. The main difference between the DSN-3400 series and the other DSN series (DSN-2100 series and DSN-3200 series) is the type of host network connection provided. The DSN-3400 series provides one 10-Gigabit Ethernet connection with an XFP transceiver interface, while the others provide four or eight Gigabit Ethernet connections with RJ-45 interfaces. Note that the XFP transceiver used to connect to the DSN-3400 series is sold separately. Host Network Connections Diagnostic Port Management Port

60 Management Port and Diagnostic Port
D-Link SAN Components of D-Link SAN Management Port and Diagnostic Port Management Port The management port is used to configure and manage D-Link’s xStack SAN from the PC, either directly connected to the SAN (using a Crossover cable) or connected to the SAN through the use of a hub or switch (using Straight-through cable). By connecting to this management port, the administrator can configure the D-Link SAN through the web GUI. Diagnostic Port The diagnostic port is a console port which uses a RS-232-to-DB-9 port interface. This port can be used if you have direct physical access to the box and is accessed during startup. The diagnostic port performs all admin password resets, sets the download configuration parameters, and accesses the Enclosure Services Test Tool. xStack Storage provides out-of-band management capabilities, which means that management and data traffic are on separate lines. Therefore, the administrator cannot connect the same NIC to the management and host network connection ports. Instead, one NIC must connect to the management port and a different NIC, either in the same PC or a different PC, must connect to the host network connection port. Communication via the management port is encrypted using SSL, without requiring configuration by the user.

61 DSN-2100 Series D-Link SAN D-Link DSN-2100 Series
Volume and RAID support Single RAID Controller (Integrated in ASIC) RAID support (Level 0, 1, 1+0, 5) Supports 1,024 Virtual Volumes (256 accessible per initiator) 1,024 target nodes Online capacity expansion Hot swappable drives Instant volume access Free space defragmentation Auto-detection failed drive Auto-rebuild spare drive RAID level migration Drive roaming-in power off Self-Monitoring Analysis and Reporting Technology (S.M.A.R.T) Hardware Specification Drive Bays: 8 Drive Interface support: SATA-II Storage Capacity: 8TB capacity with 1TB hard drive System Memory: 256MB to 512MB (512MB standard) Cache Memory: 256MB to 4GB (512MB standard) iSCSI Network Interface: four (4) 1GbE ports iSCSI Network Interface Host Interface: iSCSI Draft 2.0 compliant initiator Connections: 1,024 Hosts Jumbo Frames support Link Aggregation support CHAP authentication Access control of management iSCSI/TCP/IP Full HW Offload VLAN Support (Up to 8 VLANs) Storage Management Embedded IP-based Management GUI SMI-S version 1.1

62 DSN-3200 Series D-Link SAN D-Link DSN-3200 Series
Hardware Specification Drive Bays: 15 Drive Interface support: SATA-II Storage Capacity: 15 TB capacity with 1TB hard drive System Memory: 512MB Cache Memory: 4GB iSCSI Network Interface: eight (8) 1GbE ports Volume and RAID support Single RAID Controller (Integrated in ASIC) RAID support (Level 0, 1, 1+0, 5) Supports 1,024 Virtual Volumes (256 accessible per initiator) 1,024 target nodes Online capacity expansion Hot swappable drives Instant volume access Free space defragmentation Auto-detection failed drive Auto-rebuild spare drive RAID level migration Drive roaming-in power off Self-Monitoring Analysis and Reporting Technology (S.M.A.R.T) iSCSI Network Interface Host Interface: iSCSI Draft 2.0 compliant initiator Connections: 1,024 Hosts Jumbo Frames support Link Aggregation support CHAP authentication Access control of management iSCSI/TCP/IP Full HW Offload VLAN Support (Up to 8 VLANs) QoS support (IETF DiffServ and IEEE 802.1P tag) Storage Management Embedded IP-based Management GUI SMI-S version 1.1

63 DSN-3400 Series D-Link SAN D-Link DSN-3400 Series
Volume and RAID support Single RAID Controller (Integrated in ASIC) RAID support (Level 0, 1, 1+0, 5) Supports 1,024 Virtual Volumes (256 accessible per initiator) 1,024 target nodes Online capacity expansion Hot swappable drives Instant volume access Free space defragmentation Auto-detection failed drive Auto-rebuild spare drive RAID level migration Drive roaming-in power off Self-Monitoring Analysis and Reporting Technology (S.M.A.R.T) Hardware Specification Drive Bays: 15 Drive Interface support: SATA-II Storage Capacity: 15 TB capacity with 1TB hard drive System Memory: 512 MB Cache Memory: 4GB iSCSI Network Interface: one (1) 10GbE port iSCSI Network Interface Host Interface: iSCSI Draft 2.0 compliant initiator Connections: 1,024 Hosts Jumbo Frames support CHAP authentication Access control of management iSCSI/TCP/IP Full HW Offload VLAN Support (Up to 8 VLANs) QoS support (IETF DiffServ and IEEE 802.1P tag) Storage Management Embedded IP-based Management GUI SMI-S version 1.1

64 Key Selling Points of D-Link SAN
Market Analysis for D-Link SAN Products Key Selling Points of D-Link SAN Block data transfer over TCP/IP network using iSCSI Highly integrated single chip solution Built-in RAID controller Built-in IP-SAN Device Manager (IDM) SATA-II support for the hard drive interface A varying number of iSCSI interfaces which can be aggregated Jumbo Frame support increases performance up to 20-50%* * Based on information from Storage Networking Industry Association Several key selling points of D-Link's Storage Area Network: Highly integrated single chip solution which allows the system to handle over 65,000 input/output (I/O) operations per second. Built-in RAID Controllers provided (RAID 0, RAID 1, RAID 1+0, RAID 5, JBOD) Built-in IP-SAN Device Manager (IDM) which can be accessed from the Web SATA-II support Four (DSN-2100) and Eight (DSN-3200) 1GbE iSCSI network interfaces which can be aggregated using the LAG (Link Aggregation) feature to provide higher throughput.

65 Product Positioning for D-Link SAN
Market Analysis for D-Link SAN Products Product Positioning for D-Link SAN The D-Link xStack Storage product family of iSCSI SAN solutions is designed to address the growing high performance storage requirements brought about by the need for better application and database performance, infrastructure consolidation, and robust backup and disaster recovery solutions. D-Link now aggressively addresses these storage requirements for SMB and enterprise-level users by leveraging existing iSCSI and Ethernet technologies and lowering the total cost of ownership for storage area networking solutions compared with more complex legacy Fiber Channel and slower Network Attached Storage (NAS) solutions. The DSN-2100/DSN-3200 come with Gigabit Copper interfaces and are mainly targeted at SMB users. The DSN-3400 comes with a 10-Gigabit Ethernet interface* and is mainly targeted at enterprise users. * The DSN-3400 provides one 10GbE XFP transceiver interface (transceiver sold separately) accessed via the back panel.

66 Storage Interoperability – SMI-S Storage Device
D-Link SAN D-Link SAN Implementation Storage Interoperability – SMI-S Storage Device Storage Management Initiative – Specification (SMI-S) is a storage standard developed and maintained by Storage Networking Industry Association (SNIA). The main objective of SMI-S is to guarantee interoperability of storage devices among different vendors. D-Link’s SAN series are all designed based on the standard SMI-S version 1.1. Basic concepts SMI-S defines DMTF management profiles for storage systems. The complete SMI Specification is categorized in profiles and sub-profiles. A profile describes the behavioral aspects of an autonomous, self-contained management domain. SMI-S includes profiles for Arrays, Switches, Storage Virtualization, Volume Management and many other domains. In DMTF parlance, a provider is an implementation for a specific profile. A sub-profile describes part of the domain, which can be common part in many profiles. At a very basic level, SMI-S entities are divided into two categories: Clients are management software applications that can reside virtually anywhere within a network provided they have a physical link (either within the data path or outside the data path) to providers. Servers are the devices under management within the storage fabric. Clients can be host-based management applications (e.g., storage resource management, or SRM), enterprise management applications, or SAN appliance-based management applications (e.g., virtualization engines). Servers can be disk arrays, host bus adapters, switches, tape drives, etc.

D-Link SAN D-Link SAN Implementation Caching Operation The xStack Storage unit contains cache memory for storing data and is capable of caching write operations. Write-back caching saves the system from performing many unnecessary write cycles to the physical disks, which can lead to noticeably faster execution. However, when write-back caching is used, writes to cached memory locations are only placed in cache and the data is not written to the disks until the cache is flushed. When caching is disabled, all read and write operations directly access the physical disks. By default, write-back cache mode is always enabled and cannot be disabled.
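The following is a minimal, illustrative sketch (not D-Link firmware code) of the write-back idea described above: writes are acknowledged once they are in cache and reach the backing disks only when the cache is flushed.

```python
class WriteBackCache:
    """Minimal write-back cache sketch: writes go to an in-memory cache first
    and reach the backing disk image only when the cache is flushed."""

    def __init__(self, disk: dict):
        self.disk = disk          # stands in for the physical disks
        self.cache = {}           # dirty blocks held in cache memory

    def write(self, block: int, data: bytes):
        self.cache[block] = data  # acknowledged immediately; no disk write yet

    def read(self, block: int) -> bytes:
        # Serve from cache if the block is dirty, otherwise from the disk.
        return self.cache.get(block, self.disk.get(block, b""))

    def flush(self):
        # Only now are the cached writes committed to the disks.
        self.disk.update(self.cache)
        self.cache.clear()

disk = {}
c = WriteBackCache(disk)
c.write(0, b"hello")
print(disk)      # {} -- data is not on disk until flush
c.flush()
print(disk)      # {0: b'hello'}
```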

68 Basic iSCSI SAN Implementation
D-Link SAN D-Link SAN Implementation Basic iSCSI SAN Implementation In the most basic iSCSI SAN deployment, application servers (iSCSI hosts) access their storage from an iSCSI target storage array. iSCSI Host iSCSI Target …… Private LAN Public LAN In the above scenario, clients on the public LAN attach to each server through a network adapter (previously referred to as a network interface card, or NIC). A second Gigabit adapter in each server provides access to a private iSCSI SAN connecting to the iSCSI target storage array through an Ethernet switch. The above architecture is also known as Native SAN architecture/ Implementation. The following are minimal hardware recommendations for all iSCSI deployments: Dual processors in all iSCSI hosts. Two iSCSI network adapters or iSCSI HBA host adapters: One standard 10/100 network adapter (previously known as a network interface card or NIC) for connection to the public LAN One Gigabit Ethernet network adapter for connecting to the target storage array. (Gigabit adapters transmit data at 1000 Mbps and, like standard adapters, connect to Category 5 cabling.) Isolation of the target storage array onto a private network. At a minimum, use of CHAP authentication between iSCSI host and target.

D-Link SAN Summary Summary: D-Link SAN D-Link provides three series for its SAN appliance product line: the DSN-2100, DSN-3200, and DSN-3400 series. The D-Link DSN-2100 provides eight drive bays while the D-Link DSN-3000 series provides 15 drive bays. Generally, all D-Link SANs have the following components built in: host network connections, management port, diagnostic port, power and reset switches, power supply, and removable bezel. D-Link SAN appliances are mainly targeted at SMB and enterprise-level users who need better application and database performance, infrastructure consolidation, and robust backup and disaster recovery solutions. The D-Link SAN series is guaranteed to be interoperable with other storage appliances from different vendors because of its compliance with the SMI-S standard. By default, all D-Link SANs cache all write operations to prevent the storage from performing many unnecessary write cycles to the physical disks.

70 Questions and Answers: D-Link SAN
What standard is used to guarantee the interoperability of storage devices among different vendors? IEEE iSCSI SNIA SMI-S Which of the following statements describes D-Link SAN? D-Link SAN supports PAP authentication to provide secure access to the SAN. With D-Link SAN, using a diskless server is possible because it can be booted from the iSCSI SAN. D-Link xStack Storage contains cache memory for storing data and caching write operations. D C

71 SAN Product Features Overview*
* All features are explained based on DSN-3000 Series.

72 SAN Product Features Overview
After this section, you should gain more knowledge of the following: Tasks/activities that can be done by D-Link SAN Link aggregation and VLAN features supported in D-Link SAN TCP/IP offload engine CHAP authentication Volume virtualization Auto-Detection failed drive and volume rebuild features

73 SAN Product Features Overview
Volume Management Task The xStack Storage unit can automatically, or at the administrator’s demand, perform activities that take time and consume the controller’s resources. The administrator can control, to some degree, when tasks are to be performed. Any task can be suspended and resumed by the administrator. Some tasks can be cancelled and some can be scheduled on a recurring, periodic interval. All tasks can have their priority changed, which controls the amount of resources the xStack storage unit devotes to a task. The tasks/ activities that can be done by D-Link’s SAN are as follows: Volume initialization Volume rebuild* Volume expansion Media scanning Parity scanning * Volume rebuild will be explained later along with explanation of auto-detecting failed drive The xStack storage unit can perform the following tasks: Initialize a volume: some volume organizations (e.g., parity) require initialization. The initialization task performs this action. This task can be performed while an initiator is accessing (reading and writing) data. An initialization task can be suspended and resumed, but cannot be cancelled. Rebuild a volume: when a drive fails, every redundant volume that occupies space on that drive can be rebuilt. For mirror protection, data can be copied from the remaining copy. For parity protection, data can be recreated from the remaining data and parity information. In either case, when the xStack Storage unit finds replacement space on another drive, it performs one rebuild task for each extent that used space on the failed drive. If replacement space is not available on the drives in the pool associated with the volume, and one or more drives exist in the available pool, a drive is obtained from the available pool and automatically moved to the volume's pool. A rebuild task can be suspended and resumed but cannot be cancelled. Expand volume: the administrator can expand the size of a volume. If the volume's organization requires initialization, the initialization of the new space is performed with a grow task. A grow task can be suspended and resumed, but cannot be deleted. An initiator can access the new space while the grow task is being performed. Media scan: the administrator can scan a non-parity volume for media errors by starting a media scan task. This task reads every block in the volume to ensure there are no errors. If there are errors, this task fixes them if possible. A media scan task can be cancelled, suspended and/or resumed by the administrator. It can also be scheduled for a future time and/or at a recurring interval. Parity Scan: The Administrator can scan a parity volume for errors by starting a Parity Scan task. This task reads every block in the volume looking for errors as described for Media Scan to ensure that parity is correct. If parity errors are found, this task corrects the errors. A parity scan task can be cancelled, suspended, and/or resumed by the Administrator. It can also be scheduled for a future time and/or at a recurring interval.

74 Volume Initialization
SAN Product Features Overview Volume Management Volume Initialization Some volume organizations (e.g. parity) require initialization. The initialization task performs this action. This task can be performed while an initiator is accessing (reading and writing) data. An initialization task can be suspended and resumed, but cannot be cancelled. The initialization task consists of: Making the volume XOR consistent Detecting a read error Recovering from a read error Some volume organizations (e.g. parity) require initialization. The initialization task performs this action. This task can be performed while an initiator is accessing (reading and writing) data. An initialization task can be suspended and resumed, but cannot be cancelled. The initialization task consists of: Making the volume XOR consistent. Detecting any read error that occurs on the storage while the volume is being made XOR consistent. Recovering from such a read error without terminating the process of making the volume XOR consistent.
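To illustrate what "making a volume XOR consistent" means for a parity organization, the sketch below computes a RAID 5-style parity block for one stripe and verifies that XOR-ing all blocks of the stripe yields zeros. The block contents are arbitrary example data, not anything read from a real unit.

```python
from functools import reduce

def make_stripe_xor_consistent(data_blocks):
    """Compute the parity block for one stripe so that the XOR of all data
    blocks equals the parity block (RAID 5-style parity)."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*data_blocks))

# Three equally sized data blocks of a hypothetical stripe.
d0, d1, d2 = b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"
parity = make_stripe_xor_consistent([d0, d1, d2])

# The stripe is XOR consistent when XOR-ing every block (data + parity) gives zeros.
check = bytes(a ^ b ^ c ^ p for a, b, c, p in zip(d0, d1, d2, parity))
print(parity.hex(), check.hex())   # check prints as all zeros
```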

75 Volume Expansion
SAN Product Features Overview Volume Management Volume Expansion All D-Link SAN product series provide volume expansion to flexibly resize a logical drive. (Animated example: Volume-1, with a current size of 200GB, is expanded to 300GB.) The administrator can expand the size of a volume. If the volume's organization requires initialization, the initialization of the new space is performed with a grow task. A grow task can be suspended and resumed, but cannot be deleted. An initiator can access the new space while the grow task is being performed. Page is Animated

76 Parity Scanning SAN Product Features Overview
Volume Management Parity Scanning D-Link SAN provides parity volume scanning to check for errors in the selected volume. This task reads every block in the volume to ensure parity is correct. If parity errors are found, this task corrects the errors. The Administrator can scan a parity volume for errors by starting a Parity Scan task. This task reads every block in the volume looking for errors to ensure that parity is correct. If parity errors are found, this task corrects the errors. A parity scan task can be cancelled, suspended, and/or resumed by the Administrator. It can also be scheduled for a future time and/or at a recurring interval.

77 Storage Volume Information
SAN Product Features Overview Volume Management Storage Volume Information Storage volume information provides comprehensive information about the storage volume allocation Information that can be viewed in the storage volume information are: Status of the attached drives (offline or online) Volume Capacity Volume type

78 Event Log SAN Product Features Overview
Device Management Event Log The event log tracks the xStack Storage’s information, warning, and error messages.

79 Link Aggregation SAN Product Features Overview
iSCSI Features Link Aggregation Definition of Link Aggregation: Link aggregation is a way to achieve higher data rates by aggregating multiple physical links into one logical link. Key benefits of Link Aggregation (LAG): Improved performance High data rates Increased availability Load sharing Key benefits of Link Aggregation (LAG): Combining multiple interfaces into one logical link improves performance because the capacity of an aggregated link is higher than that of any individual link. Link aggregation provides high data rates. If a failure occurs on a link within a LAG, traffic is not disrupted, although the available bandwidth is reduced. With link aggregation, traffic is distributed across multiple links, minimizing the probability that a single link will be overwhelmed.
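As an illustration of the load-sharing idea (not the actual algorithm used by the xStack Storage firmware or any particular switch), the sketch below hashes a flow's source/destination addresses to pick one physical port of a LAG, so that packets belonging to one flow stay on one link.

```python
import zlib

def select_lag_port(src_ip: str, dst_ip: str, num_ports: int = 4) -> int:
    """Pick one physical port of a LAG for a given flow by hashing the
    source/destination pair, so packets of one flow stay on one link."""
    key = f"{src_ip}-{dst_ip}".encode()
    return zlib.crc32(key) % num_ports

# Hypothetical flows from three initiators to the storage array's 4-port LAG.
for src in ("10.0.0.11", "10.0.0.12", "10.0.0.13"):
    print(src, "->", select_lag_port(src, "10.0.0.100"))
```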

80 SAN Product Features Overview
iSCSI Features Virtual LAN (VLAN) All D-Link Storage Area Networks support 802.1Q VLAN tagging to segregate traffic into isolated zones for more secure access and to segment the broadcast domain. D-Link SAN supports up to eight VLANs with 1-to-1 mapping between IP subnet and VLAN. Multiple VLANs per physical port with VLAN tagging. All physical ports in a LAG belong to the same VLAN. With this feature, a volume can be configured under a VLAN group so that it will only be accessible by clients in the same VLAN. All D-Link SAN series support IEEE 802.1Q VLAN tagging to segregate traffic into isolated zones for secure access. All xStack Storage models support eight VLANs, one for each IP address. When you create LAGs, you can also indicate whether the LAG is to support a virtual LAN (VLAN).
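For reference, the sketch below builds the 4-byte IEEE 802.1Q tag that such tagging inserts into an Ethernet frame; the VLAN ID and priority values are arbitrary examples.

```python
import struct

def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte IEEE 802.1Q tag: 16-bit TPID (0x8100) followed by the
    TCI field (3-bit priority, 1-bit DEI, 12-bit VLAN ID)."""
    if not 0 <= vlan_id <= 0xFFF:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

# Tag for one of the up to eight VLANs (hypothetical VLAN ID 10, priority 5).
print(dot1q_tag(10, priority=5).hex())   # 8100a00a
```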

81 TCP/IP Offload Engine (TOE)
SAN Product Features Overview iSCSI Features TCP/IP Offload Engine (TOE) The major issue of IP storage is the high TCP/IP processing overhead, which constrains servers to performance levels that are unacceptable for block storage transport. TCP/IP Offload is used for reducing the amount of TCP/IP processing handled by the microprocessor and I/O subsystem to help ease server networking bottleneck. The major issue of IP storage is the high TCP/IP processing overhead, which can constrain servers to performance levels that are unacceptable for block storage transport. Only TCP/IP offload technology can provide this level of performance. iSCSI is a good example of using TCP/IP offload to achieve high-performance IP storage. TCP/IP offload Engine (TOE) is one of the technologies that can reduce the amount of TCP/IP processing handled by microprocessor and server I/O subsystem, and thus ease server networking bottleneck. Deployment of TCP/IP offload in conjunction with high-speed Ethernet technologies enables applications to take full advantage of the networking capabilities. Network performance improvements gained from TOE technology can be determined by measuring either the increase in absolute network throughput or the reduction in system resources such as CPU utilization. TOE performance benefits vary with the type of applications being run. Applications with a small network packet size may experience gains in network throughput, while applications with a large network packet size may not show significant network throughput improvements with TOE but may experience a significant reduction in CPU utilization—thereby helping to keep CPU processing cycles available for other business-critical applications such as database, backup storage, media streaming, and file server applications. Applications that require extensive network utilization—such as network backups, network attached storage, file servers, and media streaming—typically benefit the most from TOE technology.

82 SAN Product Features Overview
iSCSI Features CHAP Authentication Challenge Handshake Authentication Protocol (CHAP) is a protocol for authenticating a peer-to-peer connection based on the sharing of a ‘secret’ known only to the authenticator and that peer. CHAP authentication is supported in all D-Link SAN product series and is used when an initiator tries to connect to its target, and vice versa. Characteristics of CHAP authentication: Unidirectional/Bidirectional authentication Secret key is hashed using the MD5 algorithm Three-way handshake authentication Characteristics of CHAP authentication in D-Link SAN: Unidirectional/Bidirectional Authentication Unidirectional or one-way CHAP authentication. With this level of security, only the target authenticates the initiator. The secret is set just for the target, and all initiators that want to access that target need to use the same secret to start a logon session with the target. Bidirectional or mutual CHAP authentication. With this level of security, both the initiator and the target need to create a secret key for authenticating each other. In a CHAP implementation, the target node (called party) must authenticate the initiator (calling party), and the initiator can also verify the identity of the target node. This results in a two-way authentication, thus providing a more secure environment. CHAP provides three-way handshake authentication: the called party sends a random challenge packet to the calling party, and to pass authentication, the calling party must respond with a value derived by hashing the challenge together with the shared secret using the MD5 algorithm; the called party then computes the same value and compares it with the response. To use Challenge Handshake Authentication Protocol (CHAP) authentication when connecting to an iSCSI target, type the password that will be used during mutual CHAP authentication when an initiator authenticates a target.
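The sketch below illustrates the three-way handshake in the RFC 1994 style (response = MD5 over identifier, shared secret, and challenge). It is a conceptual example only; the secret and challenge values are made up, and real iSCSI CHAP exchanges are carried inside the iSCSI login negotiation.

```python
import hashlib
import os

SECRET = b"shared-chap-secret"          # known only to initiator and target

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994-style CHAP response: MD5 over identifier, secret, challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# 1. The target (called party) issues a random challenge.
ident, challenge = 1, os.urandom(16)

# 2. The initiator (calling party) answers with the hashed proof of the secret.
answer = chap_response(ident, SECRET, challenge)

# 3. The target computes the same value and compares; a match authenticates the peer.
print(answer == chap_response(ident, SECRET, challenge))   # True
```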

83 Volume Virtualization
SAN Product Features Overview Volume and RAID Support Volume Virtualization D-Link xStack Storage virtualizes disk storage for use by a customer's host computers (servers). Storage virtualization is the process of grouping together independent storage devices found across a network to create what appears to be a single large storage entity that can be centrally managed. Storage virtualization helps make the tasks of backup, archiving, and recovery easier and quicker by disguising the actual complexity of the SAN. Benefits of virtualization: High availability Improved capacity utilization Shared resources between heterogeneous servers Storage virtualization is the process of grouping together independent storage devices found across a network to create what appears to be a single large storage entity that can be centrally managed. Benefits of virtualization: High availability. Improved capacity utilization through storage pooling. Resource sharing between heterogeneous servers (servers running different operating systems, e.g. Windows vs. Linux). Virtualization in a SAN ensures that servers running different operating systems can safely store data on the same SAN.
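As a toy illustration of the pooling idea (not the xStack Storage allocation algorithm), the sketch below carves a virtual volume out of extents taken from several independent disks in a pool.

```python
class StoragePool:
    """Toy view of storage virtualization: independent disks are pooled and a
    virtual volume is carved out of extents taken from any of them."""

    def __init__(self, disks):
        # disks: mapping of disk name -> free capacity in GB
        self.free = dict(disks)

    def create_volume(self, name: str, size_gb: int):
        extents, needed = [], size_gb
        for disk, free in self.free.items():
            if needed == 0:
                break
            take = min(free, needed)
            if take:
                extents.append((disk, take))
                self.free[disk] -= take
                needed -= take
        if needed:
            raise ValueError("not enough free space in the pool")
        return {"volume": name, "extents": extents}

pool = StoragePool({"disk0": 300, "disk1": 300, "disk2": 300})
print(pool.create_volume("Volume-1", 500))   # spans disk0 and disk1
```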

84 Auto-Detection of Failed Drive & Volume Rebuild
SAN Product Features Overview Volume and RAID Support Auto-Detection of Failed Drive & Volume Rebuild When a drive in the storage array fails, the xStack Storage will automatically detect the failed drive and substitute it with the hot spare drive. A spare drive is normally kept in the available pool so that the drive will be available for use should another drive fail. Volume rebuild is the activity that recovers the data of a failed drive. Data can be rebuilt if the storage system is mirrored (RAID 1) or set for parity (RAID 5). If the storage is mirrored, data will be recovered from the copy on the mirror disk. If parity is used, the data on the failed drive will be recovered using the existing data from the active disks and the parity information. When a drive fails, every redundant volume that occupied space on that drive can be rebuilt. For mirror protection, data can be copied from the remaining copy. For parity protection, data can be recreated from the remaining data and parity information. In either case, when the xStack Storage unit finds replacement space on another drive, it performs a rebuild task for each extent that used space on the failed drive. If replacement space is not available on the drives in the pool associated with the volume, and one or more drives exist in the available pool, a drive is obtained from the available pool and automatically moved to the volume's pool. A rebuild task can be suspended and resumed but cannot be cancelled. NOTE: When the D-Link SAN detects that a failure has occurred on a drive, it will send a notification to the administrator.
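Complementing the parity-initialization sketch earlier, the example below shows how a block that lived on the failed drive can be recreated by XOR-ing the surviving data blocks with the parity block, which is the essence of a RAID 5 rebuild. The data values are arbitrary.

```python
def rebuild_block(surviving_blocks, parity_block):
    """Recreate the block that was lost with a failed drive by XOR-ing the
    surviving data blocks of the stripe with the parity block (RAID 5)."""
    rebuilt = bytearray(parity_block)
    for block in surviving_blocks:
        rebuilt = bytearray(a ^ b for a, b in zip(rebuilt, block))
    return bytes(rebuilt)

# Hypothetical stripe: d1 was on the failed drive, d0/d2 and parity survive.
d0, d1, d2 = b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))
print(rebuild_block([d0, d2], parity) == d1)   # True
```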

85 Drive Roaming
SAN Product Features Overview Volume and RAID Support Drive Roaming D-Link SAN provides a feature for safely moving drives in an array. If a drive in an array configured with RAID is accidentally removed, the removed drive can still be recognized using this feature, as long as the drive is configured with a RAID level that provides fault tolerance (RAID 1 and RAID 5). This is known as drive roaming in power off. (Animated example: an array configured with RAID 1, consisting of Drive-0 and Drive-1, with one drive removed.) Steps to move the drives safely: 1. Turn off the array to which the removed drive belongs 2. Plug the removed drive into any slot in the array 3. Reboot the array If a drive that is part of a redundant RAID array* is accidentally removed, the removed drive can still be recognized when it is returned to the original array. Before returning the drive to any drive bay in the array, the administrator must shut down the array, and then restart the unit after the drive has been returned to the array. The drive will be recognized once again and the unit should function as it did before the drive was accidentally removed. Note that moving drives around safely can only be done when the system is powered off. "Drive Roaming in Power Off" simply means that we can shut the unit down, move the drives to any slot we wish, reboot the unit, and find that all of the volumes originally created are alive and functioning well. Notes: * In the case of accidentally removing a drive and returning it to the array, please be aware that only fault-tolerant volumes like RAID 5 and RAID 1 will be able to recover as described above. Any non-fault-tolerant RAID, such as RAID 0, will still be unrecoverable. Clients will still need to save their configuration file that includes information about LUNs, network portals, etc. This will be needed in the case of replacement of the entire unit. The metadata on the drives will provide volume information, but all other configuration information needed is still read from the configuration file that was saved. This is why it is important that a client ensures this file is saved and kept in a safe location. Page is Animated

86 Self Monitoring and Reporting Technology (S.M.A.R.T)
SAN Product Features Overview Volume and RAID Support Self-Monitoring and Reporting Technology (S.M.A.R.T) D-Link SAN series support Self-Monitoring Analysis and Reporting Technology (S.M.A.R.T.), a technology designed to monitor the reliability of hard drives. The purpose of S.M.A.R.T. is to warn a user or system administrator of impending drive failure while time remains to take preventative action, such as copying the data to a replacement device. Features of S.M.A.R.T. technology include a series of attributes, or diagnostics, chosen specifically for each individual drive model. Attribute individualism is important because drive architectures vary from model to model. Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.) is a monitoring system for hard disks to detect and report on various indicators of reliability, with the hope of anticipating failures. With S.M.A.R.T., a hard disk's integrated controller works with various sensors to monitor various aspects of the drive's performance, determines from this information whether the drive is behaving normally or not, and makes status information available to software that probes the drive. The xStack Storage Array collects the S.M.A.R.T. information and displays it on the management console in two collections. This information consists of: S.M.A.R.T. data that serves as a summary of the overall status. S.M.A.R.T. attributes that are defined differently by each vendor. When viewing the collected information, the administrator may notice a slight delay, as the xStack Storage Array polls this information from the drive (S.M.A.R.T. data is polled from the drive every 10 seconds). How does S.M.A.R.T. report status? In an ATA/IDE environment, software on the host interprets the alarm signal from the drive generated by the "report status" command of S.M.A.R.T. The host polls the drive on a regular basis to check the status of this command, and if it signals imminent failure, sends an alarm to the end user or system administrator. This allows downtime to be scheduled by the system administrator to allow for backup of data and replacement of the drive. This structure also allows for future enhancements, which might allow reporting of information other than drive conditions, such as thermal alarms, CD-ROM, tape, or other I/O reporting. The host system can evaluate the attributes and alarms reported, in addition to the "report status" command from the disk.
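Conceptually, S.M.A.R.T. failure prediction compares each attribute's normalized value against its vendor-defined threshold. The sketch below shows that comparison on a hypothetical set of attribute readings; the attribute names and numbers are examples, not values read from an xStack unit.

```python
def failing_attributes(smart_attributes):
    """Flag S.M.A.R.T. attributes whose normalized value has dropped to or
    below the vendor-defined threshold, i.e. the drive predicts failure."""
    return [name for name, (value, threshold) in smart_attributes.items()
            if value <= threshold]

# Hypothetical normalized (value, threshold) pairs polled from a drive.
attrs = {
    "Reallocated_Sector_Ct": (95, 36),
    "Spin_Retry_Count": (97, 97),      # at threshold -> impending failure
    "Temperature_Celsius": (110, 0),
}
print(failing_attributes(attrs))       # ['Spin_Retry_Count']
```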

87 Summary: SAN Product Features Overview (1)
The xStack Storage unit can automatically, or at the administrator's demand, perform activities such as volume initialization, volume rebuild, volume expansion, media scanning, and parity scanning. Volume initialization can be performed while an initiator (i.e. a server) is reading or writing data. With D-Link SAN, the size of a volume can be flexibly expanded up to the maximum capacity of the storage. Media scanning, provided in the management console of all D-Link SAN products, can be used to scan a non-parity volume (such as a JBOD, stripe, or mirrored stripe volume) for media errors. D-Link SAN provides parity volume scanning to check for errors in the selected volume. The Task Manager provides general information for all task activity running on the D-Link SAN. Storage pool information provides comprehensive information about the storage. The D-Link xStack Storage series accommodates a 6-cell shrink-wrapped battery pack for backing up the buffer cache contents in case of power failure. D-Link SAN provides an event log feature that tracks the xStack Storage informational, warning, and error messages. To increase data transfer performance and prevent bottlenecks, D-Link SAN provides a link aggregation feature that multiplies throughput according to the number of aggregated links.

88 Summary: SAN Product Features Overview (2)
All D-Link Storage Area Networks support 802.1Q VLAN tagging to segregate traffic into isolated zones for more secure access. TCP/IP Offload is used to reduce the amount of TCP/IP processing handled by the microprocessor and I/O subsystem to ease server networking bottlenecks. CHAP authentication provides a secure, hashed authentication mechanism and is supported in all D-Link SAN product series. It is used when an initiator tries to connect to its target, and vice versa. D-Link xStack Storage virtualizes disk storage for use by a customer's host computers (servers) by grouping the storage devices found across a network into a single large storage entity that can be centrally managed. When a drive in the storage array fails, the xStack Storage will automatically detect the failed drive and substitute it with the hot spare drive. S.M.A.R.T. is a technology supported in the D-Link SAN series to monitor the reliability of hard drives and to warn a user or system administrator of impending drive failure while time remains to take preventative action.

89 Questions and Answers: SAN Product Features Overview
What tasks can be done by D-Link Storage Area Network? (Choose all that apply) Volume Initialization Media Scanning Volume Rebuild Error Correction Volume Shrinkage What cannot be done when an administrator expands a volume and initializes a grow task? Grow task deletion Grow task suspension Grow task resumption All of the above can be done when a grow task is initialized What is the function of TCP/IP Offload Engine in D-Link SAN To bypass requests coming from the client over the network when the storage’s CPU is high To turn off the xStack storage when it detects the TCP/IP utilization is high To safely move drive in an array by turning off the unit To reduce the amount of TCP/IP processing handled by the microprocessor and I/O subsystem A, B, C A D

90 Questions and Answers: SAN Product Features Overview
What is the function of disk virtualization provided by D-Link SAN? To link multiple storage repositories to multiple clients and servers. To group all storage devices found across a network into a single large storage entity that can be centrally managed. To create storage clustering that comprises master storage and slave storage, where the slave serves as a backup of the master. To achieve double data rates by aggregating multiple physical links as one logical link. What is the benefit of S.M.A.R.T.? Repairs failed disks automatically by running diagnostics, analyzing the main cause of the error, and performing a repair process based on the analysis result. Provides a 100% guarantee of disk failure prevention by regularly predicting each disk's condition and providing maintenance to keep each disk in good condition. Anticipates failures by regularly monitoring all hard disks and reporting on various indicators of reliability. All of the above. B C

91 D-Link Network Attached Storage (NAS)

92 D-Link Network Attached Storage (NAS)
After this section, you should gain more knowledge of the following: Various D-Link NAS appliances and differences between each of them Key selling points of D-Link NAS appliances Functions and applications of D-Link NAS Product positioning of D-Link NAS

93 D-Link Network Attached Storage (NAS)
D-Link Products for Network Attached Storage D-Link Network Attached Storage (NAS) DNS-313 1 Bay SATA Network Storage Enclosure Built-in iTunes, UPnP and FTP Server May be used as a USB 2.0 portable hard drive (becomes a DAS enclosure) DNS-321 2 Bays SATA Network Storage Enclosure RAID 1 support DNS-323 2 Bays SATA Network Storage Enclosure Built-in iTunes, UPnP, and FTP Server USB port for connecting to a printer DNS-343 4 Bays SATA Network Storage Enclosure RAID 1, 5 support Multi-Functional USB port The USB port provided on each D-Link NAS (except the DNS-321) can be used to connect to a printer server or UPS (Uninterruptible Power Supply) or to make the NAS act as a DAS connected directly to a client.

94 D-Link DNS-313 D-Link Network Attached Storage (NAS)
D-Link Products for Network Attached Storage D-Link DNS-313 Device Interface 1 Gigabit Ethernet port 1 USB 2.0 port* Features iTunes and UPnP AV server Scandisk feature Real-time backup alerts Permission settings for user and group Multi-language file name support Scheduled downloads from web or FTP sites Can be used as a USB 2.0 portable hard drive Supported Hard Drive Type One 3.5-inches SATA Standard Drive with capacity support up to 1.5 TB File Sharing Max. User Account: 64 users Max. Group: 10 groups Max. Shared Folder: 45 folders Max. Concurrent Connection: 64 (Samba) / 10 (FTP) Networking Features DDNS FTP DHCP Server/ Client NTP HTTP/ HTTPS CIFS/SMB *USB port is used for connecting to a desktop or notebook as a USB2.0 portable drive.

95 D-Link DNS-321 D-Link Network Attached Storage (NAS)
D-Link Products for Network Attached Storage D-Link DNS-321 Drive Management Multiple hard drive configurations (RAID 0, RAID 1, JBOD, Standard) iTunes and UPnP AV server Scandisk feature User/ group Quota Management File Sharing Support RAID migration (non-RAID to RAID 1) Device Interface 1 Gigabit Ethernet port Supported Hard Drive Type Two 3.5-inches SATA Standard Drive with capacity support up to 1.5 TB Networking Features DDNS FTP / FTP over SSL/TLS DHCP Server/ Client NTP HTTP/ HTTPS CIFS/SMB Jumbo Frames File Sharing Max. User Account: 64 users Max. Group: 10 groups Max. Shared Folder: 45 folders Max. Concurrent Connection: 64 (Samba) / 10 (FTP) Device Management Alerts Power Management Easy Search Utility Multilingual support

96 D-Link DNS-323 D-Link Network Attached Storage (NAS)
D-Link Products for Network Attached Storage D-Link DNS-323 Device Interface 1 Gigabit Ethernet port USB port* Features 4 different hard drive configurations (Standard, JBOD, RAID 0, RAID 1) iTunes and UPnP AV server Scandisk feature alerts Power management Supports BitTorrent USB port supports UPS monitoring and Print Server Support RAID migration (non-RAID to RAID 1) Supported Hard Drive Type Two 3.5-inches SATA Standard Drive with capacity support up to 1.5 TB Networking Features DDNS FTP / FTP over SSL/TLS DHCP Server/ Client NTP HTTP/ HTTPS CIFS/SMB Jumbo Frames File Sharing Max. User Account: 64 users Max. Group: 10 groups Max. Shared Folder: 45 (without BitTorrent), 10 (with BitTorrent) folders Max. Concurrent Connection: 64 (Samba) / 10 (FTP) *The USB port provided on D-Link DNS-323 is used to connect to the print server only

97 D-Link DNS-343 D-Link Network Attached Storage (NAS)
D-Link Products for Network Attached Storage D-Link DNS-343 Drive Management Multiple hard drive configurations (RAID 0, RAID 1, RAID 5, JBOD, Standard) iTunes and UPnP AV server Scandisk User/ group Quota Management File Sharing Device Interface 1 Gigabit Ethernet port 1 USB 2.0 port Supported Hard Drive Type Four 3.5-inches SATA Standard Drive with capacity support up to 1.5 TB Device Management UPS Monitoring Alerts Power Management Easy Search Utility Multilingual support ADS support Auto Power Recovery Networking Features Jumbo Frame DDNS FTP / FTP over SSL/TLS DHCP Server/ Client NTP HTTP/ HTTPS CIFS/SMB File Sharing Max. User Account: 64 users Max. Group: 10 groups Max. Shared Folder: 45 folders Max. Concurrent Connection: 64 (Samba) / 10 (FTP)

98 OLED – Special Display on D-Link DNS-343
D-Link Network Attached Storage (NAS) D-Link Products for Network Attached Storage OLED – Special Display on D-Link DNS-343 Organic Light-Emitting Diode (OLED) is a LED screen that displays information to enable the administrator to easily view and obtain the status and basic information of the DNS-343 Information that can be viewed from the OLED include: System Information Hostname of the DNS-343 Firmware version IP address of the DNS-343 Operating temperature Hard Drive Status Space percentage used on the hard disk Server Status Status of the printer server Status of the UPnP AV server Status of the iTunes server Status of the FTP server

99 Key Selling Point of D-Link NAS
D-Link Network Attached Storage (NAS) Market Analysis for D-Link NAS Products Key Selling Point of D-Link NAS File-sharing across the local network and Internet using FTP and HTTPS Flexible options for array capacity, supporting up to 1.5TB Easy installation Users and Groups/Folder with Quota and permission rights (read/ write) management Appliance servers for network users (printer server, UPnP AV server, etc) iTunes automatic discovery of music stored on the NAS Peer to Peer download client support

100 D-Link NAS Functions and Applications
D-Link Network Attached Storage (NAS) Market Analysis for D-Link NAS Products D-Link NAS Functions and Applications Shares and backs up files from multiple clients Remote access via FTP Streams music, photos, and videos from the NAS to a media player Shares a printer on the LAN Connects to a UPS for monitoring Downloads shared files from the Internet using BitTorrent Stores recorded video surveillance directly (Diagram: a remote client obtains files stored on the NAS via FTP on port 21; a D-Link multimedia player streams via UPnP AV; a UPS and a printer connect through the USB port; shared files are downloaded using a P2P connection.)

101 Product Positioning for D-Link NAS
D-Link Network Attached Storage (NAS) Market Analysis for D-Link NAS Products Product Positioning for D-Link NAS D-Link NAS products are suitable for home users, SOHO, and SMB D-Link Network Storage Enclosures address the ever-growing data storage requirements for multimedia and large data files for small to medium business users The need for data consolidation and data sharing makes these enclosures an ideal solution Various RAID level support offers advanced data protection These versatile enclosures support the latest SATA technology and Gigabit Ethernet connectivity for best-in-class performance

102 Summary: D-Link Network Attached Storage (NAS)
D-Link provides four main models for its NAS appliance product line: DNS-313, DNS-323, DNS-321, and DNS-343. All D-Link NAS appliances can be used to act as an iTunes server, UPnP server, FTP server, printer server, and for certain models, D-Link also supports added networking features such as a DHCP server, and advanced features such as quota management and DDNS, etc. D-Link DNS-343 provides an added feature on the box, which is an OLED screen to show certain status information, such as system information, hard drive status, and the server appliance status. D-Link NAS appliances are primarily targeted at home users, SOHO, or SMB users who want the benefits of network storage that is cost effective.

103 Questions and Answers: D-Link Network Attached Storage (NAS)
Which model of D-Link NAS provides OLED screen feature on the box? DNS-313 DNS-323 DNS-321 DNS-343 What are the functions of D-Link NAS? (Choose all that apply) Easy RAID migration and adaptability Play music from iTunes software with the music stored in NAS Stream music, photos and videos to a media server Wireless access of data in the NAS via wireless client Which RAID features are supported by D-Link DNS-323? (Choose all that apply) RAID 0 RAID 1 RAID 5 RAID 10 D A, B, C A, B

104 NAS Product Features Overview*
*All features are explained based on the DNS-343 product

105 NAS Product Features Overview
After this section, you should gain more knowledge of the following: What is the Easy Search Utility and the functions supported in this feature What is the Configuration Wizard and what configuration tasks are available to this wizard What is Alerts The characteristics of power management on D-Link NAS Function of Disk Diagnostic feature Purpose of user and group creation on D-Link NAS The function of quota management Appliance server roles with/without the use of USB port on D-Link NAS Remote Backup Peer-2-Peer (P2P) Downloads Volume/File sharing on D-Link NAS and scheduled downloading

106 Easy Search Utility NAS Product Features Overview
Managing the Device Easy Search Utility The Easy Search Utility is provided to help users find the D-Link NAS on the network. What the D-Link Easy Search Utility can do: Discover and connect to D-Link NAS products Map drives Configure the IP address of the NAS The Easy Search Utility is software bundled with the D-Link NAS that helps users on the network easily find and access the D-Link NAS. In order to use the Easy Search Utility, each user must install the software on their client device. With this software, the user can discover the device, connect to the NAS, and configure and manage the D-Link NAS. The user can also map volumes or folders created on the NAS from this software, as long as the user has the proper access rights.

107 Configuration Wizard NAS Product Features Overview Managing the Device
The Configuration Wizard is available in all D-Link NAS products, providing easy basic setup, including password setting, time zone setting, LAN connection type setting, and other basic additional information, such as workgroup name, domain name, device name, and description. Password Setting Set a new password for Admin user to access the web manager. Time Zone Setting Set the appropriate time zone for the proper location LAN Connection Type Setting Set the IP address of the device, either by using a static IP or a dynamic IP from the DHCP server. Additional Information Setting Set the workgroup or domain information, name of the device, and its description.

108 Email Alerts NAS Product Features Overview
Managing the Device Alerts With the alerts feature supported in the D-Link NAS product series, alerts can be sent to a specified user if certain operational conditions occur, such as the following: Information about space status A volume is full A hard drive has failed Administrator password has been changed Firmware has been upgraded System temperature has exceeded the specified temperature* *If the system temperature exceeds the configured threshold, an alert will be sent. After the alert has been sent, the D-Link NAS will be powered off for safety reasons.
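A hedged sketch of how such an alert might be delivered over standard SMTP is shown below; the mail relay, sender, and recipient addresses are hypothetical, and the actual NAS firmware sends its alerts internally.

```python
import smtplib
from email.message import EmailMessage

def send_alert(event: str, smtp_host: str, sender: str, recipient: str):
    """Send a plain-text alert similar to the conditions the NAS reports
    (volume full, drive failure, temperature exceeded, and so on)."""
    msg = EmailMessage()
    msg["Subject"] = f"NAS alert: {event}"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"The NAS reported the following condition: {event}")
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

# Hypothetical mail relay and addresses (uncomment to actually send).
# send_alert("A hard drive has failed", "mail.example.com",
#            "nas@example.com", "admin@example.com")
```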

109 Power Management on D-Link NAS
NAS Product Features Overview Managing the Device Power Management on D-Link NAS Power management offers a green feature on D-Link NAS products. With this feature, the administrator can configure the drives to shut down after a specified idle time. The device will automatically power up when data is being accessed by the client.

110 Disk Diagnostic NAS Product Features Overview
Managing the Device Disk Diagnostic Scandisk activity can be performed to check if any error has occurred on the hard disk. With this feature, all errors found will be listed with a description, along with the option to repair each of these errors. Scandisk can be performed over selected volume.

111 User and Group Creation
NAS Product Features Overview User and Group Management User and Group Creation User and groups can be created and managed on the D-Link NAS product series. The purpose of creating users and groups on the NAS product is to control user access to the storage and to control read/write privileges for specified folders on the network drives, or to setup FTP access rights. By default, all users have read and write access to all folders. Access rules can be created in the Network Access menu.

112 Network Access NAS Product Features Overview
User and Group Management Network Access The Network Access feature is used to assign access rights to a user or a group for specific folders or volumes. This section allows you to assign the access rights for your users and groups to specific folders or volumes. By default, all volumes are open to anyone on the local network with read/write access. Before specific user or group rules can be created, the default rules must be deleted. Oplocks Opportunistic locks (Oplocks) are characteristics of the LAN Manager networking protocol implemented in the 32-bit Windows family. Opportunistic locking (Oplock) is a mechanism that allows a server to tell a client process that a requested file is only being used by that process. The client can safely do read-ahead and write-behind as well as local caching, knowing that the file will not be accessed or changed in any way by another process while the opportunistic lock is in effect. The server notifies the client when a second process attempts to open or modify the locked file. If a client has oplocks disabled, all requests other than read must be sent to the server. Read operations may be performed using cached or read-ahead data as long as the byte range has been locked by the client; otherwise, they too must be sent to the server. When a client opens a file, it may request that the server grant it an exclusive or batch oplock on the file. The response from the server indicates the type of oplock granted to the client. If cached or read-ahead information was retained after the file was last closed, the client must verify that the last modified time is unchanged when the file is reopened before using the retained information. In general, oplocks are guarantees made by a server to its clients for a shared logical volume. These guarantees inform the client that a file's content will not be allowed to be changed by the server, or if some change is imminent, the client will be notified before the change is allowed to proceed. Oplocks are designed to increase network performance when it comes to network file sharing however when using file-based database applications it is recommended to set the share oplocks to No (Off). By default Windows Vista has Oplocks enabled and cannot be disabled. If you are having network performance issues with applications sharing files from the NAS you may want to try to improve performance by setting Oplocks to No (off). To learn more about Opportunistic Locks, please refer to the following URL: Map Archive The archive bit (on Windows file systems) is used to keep track of whether or not a file has been changed since it was last backed up (archived). Enabling the “Map Archive” option will map the (Windows) archive bit to the Linux (UNIX) owner execute bit, so as to preserve this part of the file’s attribute under a Linux file system. The Linux (UNIX) file system lacks the concept of an archive bit. It is recommended that you enable this option if you are performing backups on a Windows system or if you are using applications that require the archive bit. Certain backup software will attach this attribute to files that are being stored as backups, and as such, archive bits are used in incremental backups

113 Quota Management NAS Product Features Overview
User and Group Management Quota Management The D-Link NAS product series supports quota management for groups, folders, and individual users. Assigning a quota to a group, folder, or user limits the amount of storage capacity allocated to it. By default, users, groups, and folders do not have a quota, so the storage space available to each user/group/folder is unlimited.

114 Quota Illustration NAS Product Features Overview
User and Group Management Quota Illustration The quota limit for Robert on Volume-1 of the D-Link NAS is 1GB.
Robert saves 600MB of data to Volume-1: data saved, current available space for Robert is 400MB.
Robert saves another 300MB of data to Volume-1: data saved, current available space for Robert is 100MB.
Robert then tries to save 200MB of data to Volume-1: saving fails, quota exceeded.
From the animation above, we can see that by using quotas the storage capacity available to a single user can be capped at a set limit. Page is Animated
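The toy Python sketch below replays the sequence from the illustration above. It is only a model of quota accounting with made-up names, not how the NAS enforces quotas internally.

class QuotaVolume:
    """Toy model of per-user quota enforcement (sizes in MB)."""

    def __init__(self, quota_mb: int):
        self.quota_mb = quota_mb
        self.used_mb = 0

    def save(self, size_mb: int) -> None:
        if self.used_mb + size_mb > self.quota_mb:
            print(f"Saving {size_mb}MB failed: quota exceeded "
                  f"({self.quota_mb - self.used_mb}MB available)")
        else:
            self.used_mb += size_mb
            print(f"{size_mb}MB saved; "
                  f"{self.quota_mb - self.used_mb}MB available")

robert = QuotaVolume(quota_mb=1000)  # 1GB quota, as in the illustration
robert.save(600)   # saved, 400MB available
robert.save(300)   # saved, 100MB available
robert.save(200)   # fails: quota exceeded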

115 FTP Server NAS Product Features Overview
Appliance Servers FTP Server The D-Link NAS product series is equipped with a built-in FTP server. With this feature, data stored on the NAS can be accessed via FTP, both from the internal network and from outside. The server is easy to configure and allows up to 10 users to access it locally or remotely at the same time. When this feature is activated, the D-Link NAS immediately acts as an FTP server and users or groups can access the NAS from the Internet. Users and groups are only allowed to access folders for which they have been granted access rights. Note that each username or group name creates only one profile on the FTP server.
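A minimal sketch of how a client-side script might reach the NAS's FTP server using Python's standard ftplib. The address, account, and file names are placeholders, and the account is assumed to have read/write rights to the folder used.

from ftplib import FTP

NAS_HOST = "192.168.0.32"   # hypothetical NAS address
USERNAME = "robert"
PASSWORD = "secret"

with FTP() as ftp:
    ftp.connect(NAS_HOST, 21, timeout=10)
    ftp.login(USERNAME, PASSWORD)
    print("Server banner:", ftp.getwelcome())
    print("Visible entries:", ftp.nlst())
    # Upload a local file into a folder the account has write access to.
    with open("report.pdf", "rb") as fh:
        ftp.storbinary("STOR report.pdf", fh)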

116 UPnP AV Server NAS Product Features Overview
Appliance Servers UPnP AV Server UPnP (Universal Plug and Play) is a set of network protocols that allows devices to connect seamlessly and simplifies the implementation of networks in home digital environments (data sharing, communications, and entertainment) and/or corporate environments. UPnP AV (Audio and Video) servers store and share digital media, such as photographs, movies, and music, providing hardware-based media streaming services to UPnP AV compatible clients on the local network. The D-Link NAS product series supports media streaming to UPnP AV compatible clients on the local network. Use the UPnP AV server menu to select the media content made available to such clients. By default the UPnP AV server is enabled, and the “root” checkbox grants access to media content on all volumes and folders on the drive. Two main components of UPnP AV: UPnP MediaServer DCP - the UPnP server (a 'slave' device) that shares/streams media data (audio/video/picture files) to UPnP clients on the network; in this case, the D-Link NAS is the UPnP MediaServer DCP. UPnP MediaServer ControlPoint - the UPnP client (a 'master' device) that can auto-detect UPnP servers on the network and browse and stream media files from them.
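For readers curious about the discovery step a UPnP AV ControlPoint performs, the sketch below sends a standard SSDP M-SEARCH probe for MediaServer devices on the local network. It is a generic protocol example, not tied to the D-Link implementation.

import socket

# SSDP M-SEARCH probe for UPnP AV MediaServer devices on the LAN.
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: urn:schemas-upnp-org:device:MediaServer:1",
    "", "",
])

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.settimeout(3)
sock.sendto(MSEARCH.encode("ascii"), ("239.255.255.250", 1900))

try:
    while True:
        data, addr = sock.recvfrom(65535)
        print(f"MediaServer response from {addr[0]}:")
        print(data.decode("utf-8", errors="replace"))
except socket.timeout:
    pass  # no more responses within the timeout
finally:
    sock.close()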

117 iTunes Server NAS Product Features Overview
Appliance Servers iTunes Server The D-Link NAS product series features an iTunes server, which lets end users listen to music stored on the NAS from their own desks. The administrator specifies a folder on the NAS that contains the music collection, and the iTunes software automatically detects this shared folder. When the server is enabled, the music and videos contained in the specified directory become available for streaming over the network to all the computers on the local network.

118 iTunes at the Client Side
NAS Product Features Overview Appliance Servers iTunes at the Client Side The iTunes server feature is activated on the D-Link NAS; the song library stored on the NAS is automatically detected by the iTunes application on the client side, and clients play music from the NAS with iTunes.

119 DDNS NAS Product Features Overview
Networking Features DDNS Dynamic DNS (DDNS) allows a server to be hosted under a domain name even though it is assigned a dynamic IP address. DDNS deals with published server IP addresses that constantly change because dynamic addressing is used. In the D-Link NAS product series, the DDNS feature can be used to make the NAS accessible from a public network. D-Link provides a utility for customers to use the D-Link DDNS service (only 1 host may be created using the D-Link DDNS service); free DDNS service can also be obtained from third-party DDNS providers. The DDNS feature allows you to host a server (Web, FTP, game server, etc.) using a domain name that you have purchased, together with your dynamically assigned IP address. Most broadband Internet Service Providers assign dynamic (changing) IP addresses. Using a DDNS service provider, your friends can use your domain name to connect to your server no matter what your IP address is.
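As a rough illustration of what a DDNS update client does, the sketch below fetches the current public IP and sends it to an update endpoint. The URLs, hostname, and credentials are all hypothetical placeholders; the exact update API depends on the DDNS provider you register with.

import base64
import urllib.request

UPDATE_URL = "https://ddns.example.com/nic/update"    # hypothetical endpoint
IP_CHECK_URL = "https://checkip.example.com"          # hypothetical "what is my IP" service
HOSTNAME = "mynas.example.com"
USERNAME = "ddns-user"
PASSWORD = "ddns-pass"

def current_public_ip() -> str:
    with urllib.request.urlopen(IP_CHECK_URL, timeout=10) as response:
        return response.read().decode().strip()

def update_ddns(ip: str) -> str:
    # dyndns2-style update request with HTTP basic authentication.
    url = f"{UPDATE_URL}?hostname={HOSTNAME}&myip={ip}"
    request = urllib.request.Request(url)
    token = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.read().decode().strip()

print(update_ddns(current_public_ip()))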

120 Remote Backup NAS Product Features Overview
Networking Features Remote Backup The D-Link NAS Remote Backup feature allows you to back up the files stored on the NAS to one or more remote NAS devices in order to prevent data loss in the event of a failure. The feature uses a Secure Shell (SSH) connection and dynamically creates secure key pairs to encrypt the data, ensuring that your data is backed up or restored securely. It supports 10 concurrent downloads to multiple destination devices, providing efficient and comprehensive backups. When configuring the NAS for backup, the user must decide whether to configure the NAS as a Destination device or a Source device. On a Destination device, the Remote Backup feature allows you to browse to the shared backup folders set up on a Source device. The shared backup folder located on the Source device is encrypted before being sent to the Destination devices.
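The NAS performs its backups with its own built-in SSH-based mechanism. As an analogous example of the same idea from a Linux host, the sketch below pushes a share to a remote machine with rsync over SSH; paths and addresses are placeholders, and key-based SSH login is assumed to be configured already.

import subprocess

SOURCE_DIR = "/mnt/nas-share/"                       # trailing slash: copy the contents
DEST = "backup@192.168.0.40:/backups/nas-share/"     # hypothetical destination host

result = subprocess.run(
    ["rsync", "-az", "--delete", "-e", "ssh", SOURCE_DIR, DEST],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print("Backup completed")
else:
    print("Backup failed:", result.stderr)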

121 Peer-2-Peer (P2P) Download
NAS Product Features Overview Networking Features Peer-2-Peer (P2P) Download The D-Link NAS P2P Downloads feature allows the user to share files and folders via torrents. This is a great way to share files with friends, colleagues, and family.

122 NAS Product Features Overview
Networking Features File Sharing The D-Link NAS provides two ways to share files with all users over the network: Samba and FTP. Samba is an Open Source/Free Software suite that provides seamless file and print services to SMB/CIFS clients and allows interoperability between Linux/Unix servers and Windows-based clients. For file sharing, D-Link also provides multilingual support so local users can share files easily. Samba: Unicode. FTP client: Croatian, Cyrillic (Kyrgyz Republic), Czech, Danish, Dutch, English, Finnish, French, German, Greek, Hungarian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Romanian, Russian, Simplified Chinese, Slovenian, Spanish, Swedish, Traditional Chinese, Turkish. Computers communicate in numbers; in text, each number is translated to a corresponding letter. The meaning assigned to a given number depends on the character set (charset) in use: a charset can be seen as a table used to translate numbers to letters. Not all computers use the same charset (there are charsets with German umlauts, Japanese characters, and so on). Unicode is a standardized multibyte charset encoding scheme. A big advantage of using a multibyte charset is that only one charset is needed; there is no need to make sure two computers use the same charset when they communicate. Old Windows clients use single-byte charsets, called codepages by Microsoft, and the SMB/CIFS protocol has no support for negotiating which charset to use, so you have to make sure you are using the same charset when talking to an older client. Newer clients (Windows NT, 200x, XP) use Unicode.
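A tiny Python example of the charset issue described above: the same filename encodes cleanly as multibyte Unicode (UTF-8) but cannot be represented in a single-byte codepage.

# The same filename rendered under a single-byte codepage vs. Unicode (UTF-8).
name = "Übergrößen_日本語.txt"

utf8_bytes = name.encode("utf-8")      # multibyte Unicode encoding
print("UTF-8 bytes:", utf8_bytes)

try:
    cp850_bytes = name.encode("cp850")  # Western European DOS codepage
except UnicodeEncodeError as err:
    # The Japanese characters have no slot in the single-byte table, which is
    # exactly why mismatched charsets between client and server lead to
    # garbled or unrepresentable filenames.
    print("cp850 cannot represent this name:", err)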

123 Scheduled Downloading
NAS Product Features Overview Networking Features Scheduled Downloading The D-Link NAS Download Scheduling feature allows the administrator to set up a schedule for downloading folders or files and for backup sessions. By default, all local backups and file/folder downloads run in Overwrite mode, meaning that identical files in the destination folder are overwritten by the source files regardless of which file is newer. Checking Incremental Backup makes the D-Link NAS appliance compare identically named files at the source and destination, and a file is only overwritten if the source file is more recent.
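A simplified Python sketch of the Overwrite vs. Incremental decision described above; it only models the timestamp comparison and is not the NAS's actual download/backup engine.

import os
import shutil

def copy_incremental(src: str, dst: str, incremental: bool = True) -> None:
    """Overwrite mode always replaces the destination; Incremental mode
    replaces it only when the source file is more recent."""
    if incremental and os.path.exists(dst):
        if os.path.getmtime(src) <= os.path.getmtime(dst):
            print(f"Skipping {dst}: destination is already up to date")
            return
    shutil.copy2(src, dst)  # copy2 also preserves the file's timestamps
    print(f"Copied {src} -> {dst}")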

124 Printer attached to the NAS can be accessed from the client side
NAS Product Features Overview USB Port Applications Print Server The D-Link NAS can be connected directly to a printer, turning the NAS into a print server. The D-Link NAS features a built-in USB print server, giving users the ability to share a printer on their local network. Connect a USB printer to the USB port on the back of the D-Link NAS, and make sure the printer's drivers are installed on every computer you want to print from. A printer attached to the NAS can then be accessed from the client side.

125 UPS Monitoring NAS Product Features Overview
USB Port Applications UPS Monitoring An Uninterruptible Power Supply (UPS) can be connected directly to a D-Link NAS through the provided USB port. Attaching a UPS to the NAS provides a way to shut the NAS down safely in case of a power failure. When a UPS is connected to the NAS, the Status screen hides the printer information and displays information about the UPS (such as the manufacturer, product type, battery power status, and UPS status). Status of the UPS: OL indicates that the UPS is online. OB indicates that the UPS is running on battery, meaning that there has been a power failure; in this case the D-Link NAS keeps running on the UPS's battery power, and any data should be saved immediately to prevent data loss before the battery runs out. LB indicates that the UPS has low battery power.
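As an illustration, the Python sketch below maps the OL/OB/LB status codes described above to corresponding actions. The codes follow the conventions used by common UPS monitoring tools; the handling logic is a hypothetical example, not the NAS's firmware behaviour.

def handle_ups_status(status: str) -> str:
    """Interpret a space-separated UPS status string such as 'OL' or 'OB LB'."""
    flags = status.split()
    if "OB" in flags and "LB" in flags:
        return "On battery and battery low: start a safe shutdown now"
    if "OB" in flags:
        return "Power failure: running on battery, save data immediately"
    if "LB" in flags:
        return "Battery low"
    if "OL" in flags:
        return "Online: mains power OK"
    return f"Unknown status: {status!r}"

for sample in ("OL", "OB", "OB LB"):
    print(sample, "->", handle_ups_status(sample))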

126 Summary: NAS Product Features Overview (1)
The Easy Search Utility is a feature of the D-Link NAS that makes the administrator's task easier by displaying all the D-Link NAS products found within the subnet. Besides providing NAS discovery, it can also be used to map drives and configure IP addresses. To make device configuration easier, the D-Link NAS provides a configuration wizard that performs the basic configuration of the device, which is useful for users who are unfamiliar with configuring it. E-mail alerts is a feature that warns a specified user, usually the administrator, when certain conditions specified by the administrator are encountered. Power management is a feature designed to help cut down on the energy used by the NAS: the D-Link NAS will automatically shut down after being idle for a specified amount of time. D-Link provides the Disk Diagnostic feature, which can be used to perform error checking on a disk to ensure the integrity of the data stored on it. Users and groups can be created and managed on the D-Link NAS to better control user access to the data stored on the NAS, and quotas can be applied to users, groups, and folders.

127 Summary: NAS Product Features Overview (2)
All D-Link NAS devices can be set to act as application servers, providing added functionality to their clients, such as acting as an iTunes server, UPnP AV server, FTP server, or print server. Dynamic DNS (DDNS) is a feature that can be used to host a server on a dynamic IP address by giving the host a domain name so it is publicly accessible. With a NAS appliance, file sharing over the network becomes much easier: a whole volume can be shared at once using the drive mapping feature, and file sharing can also be done using FTP or Samba. The D-Link NAS appliance can be instructed to perform scheduled downloading from a specified URL. An Uninterruptible Power Supply (UPS) can be connected to the D-Link NAS through the provided USB port to provide a safe shutdown after a power failure.

128 Questions and Answers: NAS Product Features Overview
What is the function of the Easy Search Utility feature?
(A) To search for files stored on the D-Link NAS based on keywords or file extensions. (B) To find errors that have occurred on the D-Link NAS. (C) To discover D-Link NAS products over the network. (D) To search the activity history saved on the D-Link NAS.
What feature of the D-Link NAS is used to check for errors that have occurred on the hard disk?
(A) Scan Disk (B) Media Scanning (C) Parity Scanning (D) Disk Scanning
How many concurrent users are allowed to access FTP on the D-Link NAS?
(A) 1 (B) 4 (C) 10 (D) Unlimited
What are the purposes of the USB port provided on the D-Link NAS? (Choose all that apply)
(A) To make the NAS a print server when a printer is connected to the USB port. (B) To connect an iPod and synchronize music from the iPod to the NAS. (C) To connect a USB scanner so it can scan files directly. (D) To connect a UPS to enable a safe shutdown upon power failure.
Answers: 1 – C; 2 – A; 3 – C (the NAS allows up to 10 concurrent FTP users); 4 – A and D

129 Questions and Answers: NAS Product Features Overview
What feature must be used to publish a D-Link NAS for public access when it is assigned a dynamic IP address rather than a static IP address?
(A) DNS (B) FTP Server (C) D-Link UPnP AV Server (D) DDNS
What appliance server functions are supported by the D-Link NAS? (Choose all that apply)
iTunes Server, DNS Server, Web Server, UPnP AV Server
What method is used to share files on the D-Link NAS when accessing it remotely?
FTP, Telnet, SSH
Answers: 1 – D (DDNS); 2 – iTunes Server and UPnP AV Server; 3 – FTP

130 Applications and Solutions for Network Storage

131 Applications and Solutions for Network Storage
After this section, you should gain more knowledge of the following: Sample NAS application scenarios for reference Sample SAN application scenarios for reference

132 NAS Application for SMB Environment
Applications and Solutions for Network Storage NAS Applications NAS Application for SMB Environment In this scenario, a DNS-343 NAS serves both the wired LAN (Employee-1 and Employee-2) and, through a wireless router, the wireless clients (Guest-1 and Guest-2). The NAS's USB port can be attached to a UPS or a USB printer; the printer is shared by the NAS and can therefore be accessed over the network.

133 SAN Application for Server Clustering
Applications and Solutions for Network Storage SAN Applications SAN Application for Server Clustering A server cluster is a group of servers running the same application as a single virtual server. Server clustering prevents a single point of failure: if a server goes down, another server replaces it and takes over the role of the primary server. In this scenario, the clustered servers (clustered ERP servers facing the public network) share the same disks on the iSCSI SAN, together with attached tape libraries.
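To make the scenario concrete, the sketch below shows how a Linux cluster node with the open-iscsi initiator installed could discover and log in to a LUN on the iSCSI SAN. The portal address and target IQN are placeholders, and the commands must run with root privileges.

import subprocess

PORTAL = "192.168.10.5:3260"                       # hypothetical SAN portal
TARGET = "iqn.2024-01.com.example:cluster-lun0"    # hypothetical target IQN

# Discover the targets exposed by the iSCSI SAN.
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                "-p", PORTAL], check=True)

# Log in so the shared LUN appears to this cluster node as a local block device.
subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET,
                "-p", PORTAL, "--login"], check=True)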

134 SAN Application for Monitoring Purposes
Applications and Solutions for Network Storage SAN Applications SAN Application for Monitoring Purposes In this scenario, wired video cameras and a wireless camera (connected through a D-Link Wireless N router) feed a video server equipped with an iSCSI initiator. The video server, a video post-processing server, the SAN storage, and a backup storage unit are connected through a Gigabit Ethernet switch using aggregated links. Recorded video from all cameras is stored directly on the SAN storage.

