Symantec Storage Foundation The Volume Manager Matters After All

1 Symantec Storage Foundation The Volume Manager Matters After All
Jim Denison, Sr. Principal Systems Engineer
Optimize Your Storage

2 Why Symantec? Storage Management & Availability Market
Industry Validation - market share: Symantec 21.4%, HP 20.0%, IBM 18.4%, MSFT 8.2%, Sun 6.1%, Others 25.9%.
99% of the Fortune 500; 10 of 10 leading commercial banks; 10 of 10 leading IT services companies; 10 of 10 leading financial data services companies; 10 of 10 leading computer software companies; 10 of 10 leading telecommunication companies.
We have been the market leader in file systems for many years. In 2008, we overtook EMC for the #1 spot in core storage infrastructure, and we continue to build on that lead. Much of this growth is driven by the expansion of heterogeneous storage infrastructures and the fact that Storage Foundation is the best and only solution for these environments. In the 2009 Magic Quadrant, Command Central was listed in the Leaders quadrant, reflecting our continued expansion of the portfolio and strong customer adoption.
#1 Storage Infrastructure - #1 File System Software - #1 Worldwide Clustering - Market Leader (revenue)

3 NetBackup For VMware & Hyper-V Patent Pending Single File Restore
De-duplication: NetBackup introduced an innovative single file restore technology that preserved fast and easy virtual machine restores while maintaining back-end storage efficiency. NetBackup 7.0 builds on the patent-pending Granular File Restore technology and provides significant advantages that no competitor can match, including:
Write to any form of backup destination, including disk, tape, VTL, or deduplication target.
Instantly search for and restore any individual file - no need for a time-consuming entire-virtual-machine restore to disk.
[Diagram: VMware VMDKs backed up through NetBackup to disk, tape, and V-Tape targets]
The entire process is automatic: NetBackup discovers files inline, with instant single file search, instant single file restore, and the ability to write to and restore from any destination.

4 History of Logical Volume Management
VERITAS=Cross Platform Optimize Your Storage

5 History of Logical Volume Management
Optimize Your Storage

6 How Is Your Storage Being Used?
Increase storage utilization by 50%+. Stop buying storage for more than one year. Reduce CAPEX and OPEX dedicated to managing storage growth.
[Chart: the path from Physical to Logical to Claimed to Consumed capacity and finally to Effectively Utilized data, with losses to Overhead, Un-provisioned, Un-claimed, Un-consumed, Over-provisioned, and Misused space; ownership spans the Storage Team, Server Team, DB / App Admins, and End Users, from host usage down to app usage]
Optimize Your Storage
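The waterfall on this slide can be made concrete with a little arithmetic. The sketch below uses purely illustrative loss fractions (the slide gives none), but it shows how each layer of overhead, unclaimed, and over-provisioned space compounds until effective utilization lands near the ~40% industry averages quoted later in the deck:

```python
# Illustrative storage "waterfall": start from raw physical capacity and
# subtract each category of loss named on the slide. The fractions below
# are hypothetical examples, not measured values.
def utilization_waterfall(physical_tb, losses):
    """losses: ordered (label, fraction-of-physical lost) pairs."""
    remaining = physical_tb
    for label, fraction in losses:
        lost = physical_tb * fraction
        remaining -= lost
        print(f"{label:<16} -{lost:6.1f} TB  -> {remaining:6.1f} TB left")
    return remaining / physical_tb

losses = [
    ("overhead",         0.10),  # RAID, spares, formatting
    ("un-provisioned",   0.15),  # raw capacity not carved into LUNs
    ("un-claimed",       0.10),  # LUNs presented but not claimed by hosts
    ("un-consumed",      0.15),  # file systems sized larger than their data
    ("over-provisioned", 0.10),  # space reserved "just in case"
    ("misused",          0.05),  # stale copies, orphaned files
]

eff = utilization_waterfall(100.0, losses)
print(f"Effective utilization: {eff:.0%}")
```

With these example fractions, 100 TB of physical storage yields only 35 TB of effectively utilized data, which is why reclaiming even one or two of these layers can defer storage purchases.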

7 Significant Events For Storage
SCSI; External Storage; Fibre Channel; The Information Superhighway (The Internet); Unix Proliferation; .com BOOM!; The Consumerization Of Digital Data; Online Commerce
Optimize Your Storage

8 A Top Challenge for IT Administrators: Storage Growth and Wastage
“Storage is the largest capex component of new virtualization projects… %” - TechAlpha
[Chart: 2008-2012 storage growth at a 49% compounded annual growth rate (CAGR) - IDC]
“Gartner storage key metric benchmarking research shows that the average storage utilization rate is 40.8%” - “The Future of Storage Management”, Gartner, February 2010
“By 2012, users will install 6.5 times the amount of terabytes that they installed in 2008” - “Cost Optimization, Key Initiatives and I&O Maturity: What Participants Told Us at the 2009 Data Center Conference”, Gartner, February 2010
Optimize Your Storage

9 Advances In Technology
Optimize Your Storage

In the mid-1800s, punch cards were used to provide input to early calculators and other machines.
In the 1940s, vacuum tubes were used for storage. In the 1950s, tape drives finally started to replace punch cards; only a couple of years later, magnetic drums appeared on the scene. In 1956 came the first hard drive: the IBM 305 RAMAC is the first magnetic hard disk for data storage, and the RAMAC (Random Access Method of Accounting and Control) technology soon becomes the industry standard. It required fifty 24-inch disks to store five megabytes (million bytes, abbreviated MB) of data and cost roughly $35,000 a year to lease - or $7,000 per megabyte per year. For years, hard disk drives were confined to mainframe and minicomputer installations. Vast "disk farms" of giant 14- and 8-inch drives costing tens of thousands of dollars each whirred away in the air conditioned isolation of corporate data centers. JUN. Teletype ships its Model 33 keyboard and punched-tape terminal, used for input and output on many early microcomputers. IBM builds the first floppy disk. IBM introduces the "memory disk", or "floppy disk", an 8-inch floppy plastic disk coated with iron oxide. IBM introduces the IBM 3340 hard disk unit, known as the Winchester, IBM's internal development code name. The recording head rides on a layer of air 18 millionths of an inch thick. AUG. iCOM advertises their "Frugal Floppy" in BYTE magazine, an 8-inch floppy drive, selling for US$1200. AUG. Shugart announces its 5.25 inch "minifloppy" disk drive for US$390. DEC. At an executive board meeting at Apple Computer, president Mike Markkula lists the floppy disk drive as the company's top goal. JUN. Apple Computer introduces the Disk II, a 5.25 inch floppy disk drive linked to the Apple II by cable. Price: US$495, including controller card. The 1980's: the introduction of the first small hard disk drives. The first 5.25-inch hard disk drives packed 5 to 10 MB of storage - the equivalent of 2,500 to 5,000 pages of double-spaced typed information - into a device the size of a small shoe box. 
At the time, a storage capacity of 10 MB was considered too large for a so-called "personal" computer. Sony Electronics introduces the 3.5 inch floppy disk drive, double-sided, double-density, holding up to 875KB unformatted. JUN. Seagate Technologies announces the first Winchester 5.25-inch hard disk drive. JUN. Shugart begins selling Winchester hard-disk drives. JUN. Sony Electronics demonstrates its 3.5 inch microfloppy disk system. SEP. Iomega begins production of the Alpha 10, a 10MB 8-inch floppy-disk drive using Bernoulli technology NOV. Drivetec announces the Drivetec 320 Superminifloppy, offering 3.33MB unformatted capacity on a 5.25-inch drive. DEC. Tabor demonstrates a 3.25-inch floppy disk drive, the Model TC500 Drivette. Unformatted capacity is up to 500KB on a single side. DEC. Amdek releases the Amdisk-3 Micro-Floppy-disk Cartridge system. It houses two 3-inch floppy drives designed by Hitachi/Matsushita/Maxell. Price is US$800, without a controller card. At the West Coast Computer Faire, Davong Systems introduces its 5MB Winchester Disk Drive for the IBM PC, for US$2000. MAY. Sony Electronics announces the 3.5 inch floppy disk and drive, double-sided, double-density, holding up to 1MB. 1983 With the introduction of the IBM PC/XT hard disk drives also became a standard component of most personal computers. The descriptor "hard" is used because the inner disks that hold data in a hard drive are made of a rigid aluminum alloy. These disks, called platters, are coated with a much improved magnetic material and last much longer than a plastic, floppy diskette. The longer life of a hard drive is also a function of the disk drive's read/write head: in a hard disk drive, the heads do not contact the storage media, whereas in a floppy drive, the read/write head does contact the media, causing wear. Philips and Sony develop the CD-ROM, as an extension of audio CD technology. 
MAY - Apple Computer introduces the DuoDisk dual 5.25-inch floppy disk drive unit for the Apple II line. By the mid-1980's, 5.25-inch form factor hard drives had shrunk considerably in terms of height. A standard hard drive measured about three inches high and weighed only a few pounds, while lower capacity "half-height" hard drives measured only 1.6 inches high. JUN. Apple Computer introduces the UniDisk 5.25 single 5.25-inch floppy disk drive, with the ability to daisy-chain additional drives through it. By 1987, 3.5-inch form factor hard drives began to appear. These compact units weigh as little as a pound and are about the size of a paperback book. They were first integrated into desktop computers and later incorporated into the first truly portable computers - laptops weighing under 12 pounds. The 3.5-inch form factor hard drives quickly became the standard for desktop and portable systems requiring less than 500 MB capacity. Height also kept shrinking with the introduction of one-inch high, 'low-profile' drives. SEP. Microsoft ships Microsoft Bookshelf, its first CD-ROM application. JAN. Commodore gives a sneak preview of a proposed "interactive graphics player", based on a variant of the Amiga 500, with 1MB of RAM. The machine includes an integrated CD-ROM drive, but no keyboard. NOV. The Multimedia PC Marketing Council sets the minimum configuration required of a PC to run MPC-class software: 10-MHz 286 processor, 2MB RAM, 30MB hard drive, 16-color VGA, mouse, 8-bit audio card, 150KBps CD-ROM drive. JAN. Commodore releases the CDTV (Commodore Dynamic Total Vision) package. It features a CD-ROM player integrated with a 7.16-MHz based Amiga 500. List price is US$1000. JUN. Tandy introduces its low-cost CDR-1000 CD-ROM drive for PCs. At US$400, including drive and controller card, it is about half the price of other drives. OCT. Insite Technology begins shipping its 21 MB 3.5-inch floppy disk drive to system vendors. 
The drive uses "floptical" disks, using optical technology to store data. By 1992, a number of 1.8-inch form factor hard drives appeared, weighing only a few ounces and delivering capacities up to 40 MB. Even a 1.3-inch hard drive, about the size of a matchbox, was introduced. Of course, smaller form factors in and of themselves are not necessarily better than larger ones. Hard disk drives with form factors of 2.5 inches and less currently are required only by computer applications where light weight and compactness are key criteria. Where capacity and cost-per-megabyte are the leading criteria, larger form factor hard drives are still the preferred choice. For this reason, 3.5-inch hard drives will continue to dominate for the foreseeable future in desktop PCs and workstations, while 2.5-inch hard drives will continue to dominate in portable computers. OCT. NEC Technologies unveils the first triple-speed (450KBps) CD-ROM drive. JAN. NEC Technologies ships its quad-speed CD-ROM, priced at US$1000. DEC. Iomega Corp. introduces its Zip drive and Zip disks, floppy disk sized removable storage in sizes of 25MB or 100MB. Since its introduction, the hard disk drive has become the most common form of mass storage for personal computers. Manufacturers have made immense strides in drive capacity, size, and performance. Today, 3.5-inch, gigabyte (GB) drives capable of storing and accessing one billion bytes of data are commonplace in workstations running multimedia, high-end graphics, networking, and communications applications. And, palm-sized drives not only store the equivalent of hundreds of thousands of pages of information, but also retrieve a selected item from all this data in just a few thousandths of a second. What's more, a disk drive does all of this very inexpensively. By the early 1990s, the cost of purchasing a 200 MB hard disk drive had dropped below $200, or less than one dollar per megabyte. NOV. 
IBM announced the world's highest capacity desktop PC hard disk drive with new breakthrough technology called Giant Magnetoresistive (GMR) heads. Pioneered by scientists at IBM Research, GMR heads will be used in IBM's Deskstar 16GP, a 16.8-gigabyte drive. This brings down the cost of storage to .25 cents per megabyte. NOV. IBM announced a 25GB hard drive. That first hard disk drive in 1956 had a capacity of 5 megabytes. IBM's Deskstar 25GP 25-gigabyte (GB) drive has 5,000 times the capacity of that first drive. It holds either the double-spaced typed text on a stack of paper more than 4,000 feet high, more than six full-length feature films or 20,000 digital images. October 18, IBM raised the bar in hard drive technology with a new family of record-breaking hard drives and a new technology that protects drives against temperature variation and vibration. The 10,000 RPM Ultrastar 72ZX -- the world's highest capacity drive at 73 gigabytes (GB). Paris, FRANCE - June 20, IBM® announced the availability of the 1GB Microdrive, the world's smallest, lightest and largest capacity mobile hard disk, increasing storage by a factor of three.
Keeping It Safe
Optimize Your Storage

11 Almost Caught On….. Optimize Your Storage

12 That's My Data In Your Pocket
Optimize Your Storage

13 One Constant Other than Death and Taxes
Every open-systems operating system must utilize a volume manager to create storage containers suitable to house data. Symantec provides the industry, both enterprise and consumer, with the mechanism to freely house data in logical containers formed from whatever physical storage they choose. Providers of other volume managers are tied to platform initiatives. Symantec's VERITAS Storage Foundation enables data container movement between operating systems and/or storage platforms. Optimize Your Storage
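The core idea of a "logical container" can be sketched in a few lines. This is a toy model, not the VxVM API: a volume is just an ordered set of extents carved from whichever physical disks have free space, regardless of which array or vendor backs them:

```python
# Toy model of volume management (hypothetical classes, not VxVM):
# a logical volume maps its extents onto any mix of physical disks.
class PhysicalDisk:
    def __init__(self, name, size_gb):
        self.name, self.free_gb = name, size_gb

class Volume:
    def __init__(self, name):
        self.name, self.extents = name, []   # list of (disk_name, gb) pairs

    def size_gb(self):
        return sum(gb for _, gb in self.extents)

def make_volume(name, size_gb, disks):
    """Carve a logical volume out of free space on any mix of disks."""
    vol, needed = Volume(name), size_gb
    for disk in disks:
        if needed == 0:
            break
        take = min(disk.free_gb, needed)
        if take:
            disk.free_gb -= take
            vol.extents.append((disk.name, take))
            needed -= take
    if needed:
        raise ValueError("not enough free space")
    return vol

# Disks from two different (hypothetical) arrays -- the volume doesn't care.
disks = [PhysicalDisk("emc_dmx_01", 50), PhysicalDisk("hds_usp_01", 80)]
vol = make_volume("datavol", 100, disks)
```

Because the container is defined by this mapping rather than by any one array, the same abstraction is what makes moving data between storage platforms possible.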

14 Why Symantec: The Complexity of Heterogeneity
TOOLS NEEDED: 50+
[Diagram: per-platform tools, each numbered on the slide, including ServiceGuard, PolyServe, JFS2, ReiserFS, SVC, LVM, GPFS, UFS, Sun Cluster, MSCS, ASM, DLM, SAM-FS, QFS, TruCluster, GeoSpan, Ext3, ZFS, LDM, SVM, SAN-FS, JFS, ClusterFrame, PowerHA, EVM, SDS, OCFS, SAN Copy, SnapShot, TrueCopy, MirrorView, SnapView, Snap, DoubleTake, RepliStor, MPxIO, MPIO, InstantImage, FlashCopy, PPRC, ShadowImage, HDLM, PowerPath, ShadowCopy, TimeFinder, SRDF, SNDR, SecurePath, MirrorDisk/UX, and Data Replication Manager]
TALKING POINTS
Start by explaining that this is the set of features required to connect applications to data and keep the applications and data available:
File system and volume manager manage the storage and make it easier for applications and users to leverage that storage.
Multipathing ensures the servers can reach the data and, in some cases, can optimize that path.
Replication and snapshots ensure data availability in the case of data corruption, storage hardware failure, or a complete site outage.
Clustering ensures servers and applications are available in the event of server hardware failure or application issues.
ADVANCE SLIDE
If you have a heterogeneous environment, the complexity becomes immediately apparent: to connect your applications to data and keep them available, you may have to use up to 50 different tools. Nobody actually has 50 tools, but most customers we talk to have all 5 server platforms and 2 or 3 storage vendors. Even if you only have EMC, but you have some Symmetrix and Clariion, you still have two different tools for replication and two different tools for snapshots - from the same vendor. This is where you should personalize this to the customer's environment. Talk about known software they have and try to call out a few of the places where they are using multiple tools for the same thing. This is easy with EMC as done in the bullets above.

15 Why Symantec: The Simplicity of Standardization
TALKING POINTS
At VERITAS, we are working to help eliminate some of the complexity for you by developing one solution across all platforms and arrays. Storage Foundation HA is VERITAS Storage Foundation plus VERITAS Cluster Server. This integrated solution will help reduce indirect costs like the ones IDC was talking about, AND there are opportunities to reduce direct costs. Many of the software packages on the previous slide are things you pay for - some of them quite a bit of money, like HP Operating Environments and snapshot and replication software on the array side. We will look at the substantial direct cost saving opportunities in detail in a few minutes.
OBJECTION HANDLER
SF for Windows doesn't have a file system, so someone could accurately point out that you need two tools. This is true; casually dismiss this slide as a little marketing fluff, but re-emphasize the core point.
TOOLS NEEDED: 1 - Storage Foundation HA/DR

16 Dynamic Multi-Pathing Core To Volume Manager
Storage-specific multipathing (Vendor A Multipathing and Vendor B Multipathing, each tied to its own arrays: DMX, Clariion, USP-V) vs. the same multipathing core to the volume manager - VERITAS Dynamic Multipathing (DMP) across DMX, Clariion, HDS AMS, IBM XIV, and commodity storage, leaving you free to choose the most cost-effective storage.
Key things to consider:
Storage vendor lock-in results in % price premiums on storage.
Multipathing is a must-have for performance and data availability.
It defines what storage can be connected to a server.
It is a strategic source of hardware vendor lock-in.
Optimize Your Storage
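The multipathing behavior being described - spread I/O across healthy paths, survive the loss of one - can be sketched generically. This is a hedged illustration of the concept, not Veritas DMP internals; path names are made up:

```python
# Generic dynamic-multipathing sketch (illustrative, not DMP internals):
# round-robin I/O across healthy paths; on path failure, continue on the
# survivors so the LUN stays reachable.
import itertools

class Multipath:
    def __init__(self, paths):
        self.healthy = list(paths)
        self._cycle = itertools.cycle(self.healthy)

    def fail(self, path):
        """Mark a path dead and rebuild the rotation over the survivors."""
        self.healthy.remove(path)
        self._cycle = itertools.cycle(self.healthy)

    def next_path(self):
        """Pick the path for the next I/O."""
        if not self.healthy:
            raise IOError("all paths to LUN are down")
        return next(self._cycle)

mp = Multipath(["hba0:port0", "hba1:port1"])
first = [mp.next_path() for _ in range(4)]   # alternates across both paths
mp.fail("hba0:port0")                        # cable pull / HBA failure
after = [mp.next_path() for _ in range(2)]   # I/O continues on the survivor
```

Because this logic lives in the host's volume manager rather than in any one array's driver, the same policy applies no matter whose storage sits at the other end of the path.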

17 Storage Foundation HA/DR Capabilities
Centralized Management; Storage Pooling; End-to-End Provisioning; Storage Tiering; Online Data Protection; N+1 Cluster; Fast Failover
[Diagram: a SAN disk group of FC LUNs 1-3 plus a large shared ATA LUN, carved into Vol1-Vol3 with QoSS-managed file systems tiered across FC and ATA and protected by NBU; apps 1-5 run on physical machines A and B, with a standby machine providing N+1 failover]

18 Need for Automated, Non-Disruptive Storage Tiering
Drivers:
Over 50% YoY data growth in enterprise environments.
Rejuvenation of storage tiering.
Advent of SSD/FC/SATA/SAS within a storage enclosure.
Heterogeneous storage environments increasingly the norm.
Today we are seeing a trend where data centers experience over 50% YoY data growth. This explosive growth makes the data center administrator's job less and less manageable without proper tools. On top of this, governments are imposing tighter and stricter compliance and regulatory requirements on data center management. Administrators not only need a tool to manage storage growth, they also need to account for what data they have, where it is located, and who is accessing it. These two major trends drive the need for data classification, cost control, and performance improvement in the data center. Data classification is necessary because administrators need to know what data they have, where it is located, and who is accessing it; with a full account of their data, they can categorize it according to its value. Data classification provides a foundation for storage tiering, since administrators then know what to put where in the storage tiers. Explosive data growth also drives up the cost of storage devices. To lower this cost, administrators can purchase different types of storage for different types of data and thus have different tiers of storage. Manually moving data between tiers will only drive up operating expenses in the long run; this calls for a tool that can automatically move data between tiers according to pre-defined policies, without human intervention. Performance is also a concern when there is so much data: administrators want to provide proper service levels for various applications according to their agreements with the application owners, and storage tiering can help. Data needed for a higher SLA can be placed in a high-performing tier, while data for a lower SLA can be moved to lower tiers. This way data is served from the appropriate tier, which also reduces contention and increases performance. The trends lend themselves to storage tiering: data migration between tiers should be automatic and policy driven, and most importantly it should be done without any changes to how the applications are set up.
All data is not equal - categorize data according to its business value; value can be derived from age of data, nature, access pattern, etc.
Cost - save on CAPEX with improved storage utilization; save on OPEX with automated data movement.
Performance - move hot data to a high-performing tier (e.g. SSD); move less-valued data to lower tiers to avoid contention.
Need to deliver the promise of tiering without re-architecture of the data center.
Optimizing Your Storage Environment with Veritas™ Storage Foundation

19 Use DST for Compliance and Document Lifecycle Management
Leading bank in China.
Policy: Rule1, Rule2, …, RuleN.
Compliance: transaction records need to be stored for 20 years. A large number of transaction records are scanned and stored daily; data is generated at the rate of 1.3 TB/week.
Leverage thin provisioning for tier 3 storage: previously generated data is pushed to tier 3 over time, and tier 3 can grow at the physical layer as needed.
[Diagram: a multi-volume file system (/Images/financial, /WkOld, /YearOld) with current data on V1 (Tier 1), week-old data on V2 (Tier 2), and year-old data on V3 (Tier 3, a thin-provisioned pool of disks)]
DST has a strong foothold in the banking and financial sectors. For instance, a bank in China needs to meet a compliance rule to keep transaction records for 20 years. These records are scanned into the system and stored daily; the volume of data generated amounts to 1.3 TB/week. This calls for using DST to classify data according to its age: tier 1 keeps around a week's worth of data; after a week, the bank's DST policy moves data from tier 1 to tier 2; after a year, data is moved to tier 3.
Optimizing Your Storage Environment with Veritas™ Storage Foundation
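The age-based policy described here (current week on tier 1, older than a week on tier 2, older than a year on tier 3) is easy to model. The sketch below is illustrative only - tier names, thresholds, and function names are made up, and real DST policies are declared in the file system, not in application code:

```python
# Illustrative age-based placement policy (not DST policy syntax):
# current data -> tier1, week-old -> tier2, year-old -> tier3.
def place(age_days):
    if age_days < 7:
        return "tier1"       # current week: fast primary storage
    if age_days < 365:
        return "tier2"       # older than a week: mid-tier
    return "tier3"           # older than a year: thin-provisioned pool

def sweep(files):
    """files: {name: age_days} -> {name: tier} placement plan."""
    return {name: place(age) for name, age in files.items()}

plan = sweep({
    "txn_today.img":     0,    # stays on tier 1
    "txn_lastmonth.img": 30,   # demoted to tier 2
    "txn_2008.img":      900,  # demoted to tier 3
})
```

The point of automating this is that the sweep runs on a schedule without human intervention, and applications keep the same file paths throughout.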

20 Perform Storage Migrations Online Over Any Distance
Migrate arrays locally, or migrate over distance with synchronous/asynchronous replication (Veritas Storage Foundation; SF and VCS at both sites).
Wizard-based configuration; configured during business hours, scheduled for non-business hours; unattended migration; no application downtime; performed over IP without expensive infrastructure; data center migration allows for easy back-out if issues are detected.
Utilize SmartMove to go thin!
Optimize Your Storage

21 Standard Block Level Migration With Storage Foundation SmartMove™
Migrating to Thin Storage: Eliminate Waste with SmartMove™ in Storage Foundation
Migrating online typically involves a FULL block-level copy to thin devices
This applies to ALL block-level copy methods (host, network, appliance, array)
Online reclamation as you migrate to thin storage
Hardware independent: works with ALL thin storage vendors
Industry's only solution for online thick-to-thin conversions
[Diagram: with standard block-level migration, both data in the file system and the dedicated-in-array dead space on the legacy storage land on the thin storage; with SF SmartMove™, only data in the file system is copied, so the thin target stays thin.]
Customers that deploy thin storage may be interested in reducing storage spend for new applications, but we have already run into customers who are particularly interested in reclaiming unused space from existing applications by migrating them to thin storage. Because of the problems with mirroring from thick to thin, this is not an easy task. To migrate and reclaim, customers can take the application offline and do file-system-level copies from the old thick storage to the new thin storage. This reclaims storage, but it also results in extensive application downtime because of the amount of storage that must be copied. Traditional block-level copy methods cannot determine which blocks contain data and move only those blocks from legacy arrays to thin-provisioned arrays. As a result, the migration completely fills up the new thin LUN, meaning physical capacity must be allocated for the entire virtual volume, which defeats the purpose of thin provisioning. This is the nature of block-based operations, and it is why hardware vendors advise against traditional host-based mirroring when using thin storage. The interesting thing for Storage Foundation is that this issue applies to ALL methods of block mirroring.
Network-based solutions such as IBM SVC, for example, are inherently thin-provisioning unfriendly, and unlike host-based solutions such as Storage Foundation, they cannot really do anything about this problem: they are too far from the file system. To help IT managers migrate their existing legacy storage to a thin environment, Symantec has announced Veritas SmartMove™.
[Chart: standard migration takes a source (legacy) LUN with 70% wasted storage to a target (thin) LUN with 70% wasted storage; with SmartMove, the same source migrates to a target (thin) LUN with 0% wasted storage.]
Optimizing Your Storage Environment with Veritas™ Storage Foundation
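The thick-to-thin migration problem can be sketched in a few lines. This is an illustrative model, not the VxVM/SmartMove implementation: a traditional block-level mirror resync writes every block of the source volume to the thin target, inflating it, while a SmartMove-style copy consults the file system's allocation map and writes only blocks that actually hold data. All names here (`Volume`, `allocated`, `backing`) are hypothetical.

```python
# Hypothetical sketch of the SmartMove idea. A thin array allocates
# physical backing only for blocks that are written, so the copy method
# determines how much of the thin target gets inflated.

class Volume:
    def __init__(self, size_blocks, allocated):
        self.size_blocks = size_blocks
        self.allocated = set(allocated)  # blocks the file system uses
        self.backing = {}                # thin array: written blocks consume space

def full_block_copy(src, dst):
    """Traditional mirror resync: every block is written, data or not."""
    for blk in range(src.size_blocks):
        dst.backing[blk] = blk in src.allocated
    return len(dst.backing)              # physical blocks consumed on target

def smartmove_copy(src, dst):
    """Copy only the blocks the file system reports as in use."""
    for blk in src.allocated:
        dst.backing[blk] = True
    return len(dst.backing)

src = Volume(1000, allocated=range(300))         # 30% used, 70% free
thin1, thin2 = Volume(1000, []), Volume(1000, [])
print(full_block_copy(src, thin1))               # 1000: thin LUN fully inflated
print(smartmove_copy(src, thin2))                # 300: only live data lands on thin
```

The point of the sketch: the full copy consumes the entire virtual size on the thin array regardless of how empty the file system is, which is exactly why block mirroring alone defeats thin provisioning.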

22 Reduce Storage Consumption – Staying Thin
Reduce Storage Consumption – Staying Thin
"TP LUN utilization typically ends up worse in terms of utilization after months of use than a comparable thick LUN"
Reclaim unused storage online or on a schedule via the array's reclamation API, driving up utilization
[Diagram timeline: create a new "oversize" thin volume and write data into it (e.g. \Data\Ora\DB); TP storage is consumed as physical storage is assigned. When data is deleted, TP storage remains unchanged until thin reclamation returns it.]
[Diagram: inefficient TP storage usage with a standard file system (NTFS, ext3/4): new writes go to the end of the file system, so back-end LUN growth is triggered as the file system is consumed, even while deleted space sits unused. Optimized storage usage with VxFS: free file-system blocks are re-used for new data, avoiding unnecessary LUN growth.]
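A minimal model of why deletes alone do not return space to a thin array, and what a reclamation call changes. This is an illustration of the concept, not the VxFS API; the `ThinLUN` class and its `unmap` method are stand-ins for the array-side reclamation interface (SCSI UNMAP-style commands in real hardware).

```python
# Illustrative model: on a thin LUN, the first write to a block allocates
# physical backing; a file delete only updates file-system metadata, so
# the array keeps backing the blocks until reclamation tells it otherwise.

class ThinLUN:
    def __init__(self):
        self.backing = set()             # physically allocated blocks

    def write(self, blocks):
        self.backing |= set(blocks)      # first write allocates backing

    def unmap(self, blocks):
        self.backing -= set(blocks)      # reclamation releases backing to the pool

lun = ThinLUN()
lun.write(range(100))                    # file system writes 100 blocks
# Files are deleted: the FS marks blocks 50-99 free, but never tells the array.
freed = set(range(50, 100))
print(len(lun.backing))                  # 100: "TP storage remains unchanged"
lun.unmap(freed)                         # thin reclamation informs the array
print(len(lun.backing))                  # 50: freed blocks returned to the pool
```

This is the "staying thin" half of the story: without the reclamation step, a thin LUN's physical footprint only ever grows, which is how it ends up worse than a comparable thick LUN after months of use.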

23 Example: Financial Benefits of Thin Provisioning
Eliminate wasted storage
[Chart: storing 153 TB of data requires 510 TB in a traditional "thick" environment, 306 TB with hardware-based thin provisioning alone, and 153 TB when getting thin and staying thin with thin reclamation.]
Source: Symantec / Enterprise Strategy Group, 2010
Symantec is the ONLY solution for optimizing thin-provisioned environments
Optimize Your Storage
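The chart's figures reduce to simple arithmetic. A quick sketch using the TB values from the Symantec/ESG 2010 comparison (153 TB of data under the three provisioning models) to show waste and utilization per model:

```python
# Back-of-the-envelope version of the slide's chart: same 153 TB of data,
# three provisioning models, with wasted capacity and utilization derived.
data = 153           # TB of actual application data
models = {
    "thick":        510,   # traditional over-provisioned environment
    "thin":         306,   # hardware thin provisioning alone
    "thin+reclaim": 153,   # thin plus getting-thin/staying-thin reclamation
}

for label, provisioned in models.items():
    waste = provisioned - data
    util = data / provisioned
    print(f"{label:13s} provisioned={provisioned} TB  wasted={waste} TB  utilization={util:.0%}")
```

Run as written, this reports 357 TB wasted at 30% utilization for thick, 153 TB wasted at 50% for thin alone, and 0 TB wasted at 100% for thin with reclamation, which is the financial argument the chart is making.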

24 Cluster File System (Some Use Cases)
Faster Failovers: achieve sub-minute failovers, 90% faster failover times
"Don't pay the RAC tax!" Eliminate the cost and complexity of Oracle RAC
Performance for Parallel Apps: scalable shared storage means near-linear scalability and performance for parallel applications
Improve Storage Utilization: share storage between servers, eliminating islands, and eliminate multiple copies of data
Clustered NFS: scale storage and server capacity independently; improve HA and storage management
[Diagram: NFS clients connect to CFS servers backed by SAN storage.]
For clustered NFS, the key point is that storage and server capacity scale independently. For shared storage, we often consider shared binaries and configs (install an application once, use it on many servers), and we also consider storage pooling: pool LUNs together rather than having islands of storage on a per-server basis, so free space is pooled and no additional provisioning is needed when LUNs fill up. For the parallel box, CFS has intelligence that knows when it can act like VxFS under parallel workloads, and it has distributed metadata, so not only is it well suited to parallel apps but, unlike other cluster file systems, its performance scales linearly: double the servers and you get roughly 2x the performance, which is not the case with native tools.

25 Optimize Storage Achieve End-to-End Visibility
Drive Storage Efficiencies, Reduce Acquisition Cost
Visibility at every layer: physical, logical, claimed, host usage, consumed, app usage
Know what you have; create accountability; plan for capacity
Reclaim under-utilized assets; dedupe, compress and resize; move to thin, stay thin
Commoditize storage; switch vendors non-disruptively; place data on the right tier of storage
Time allotment: 2-3 minutes. Script: bring up three things: the target or buyer; the utilization and optimization challenges; and ask the audience to think about a customer with a similar problem. Close with: "Start thinking about a customer you'd like to bring up one of these opportunities with."
Optimize Your Storage
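The layer list on this slide (physical, logical, claimed, host usage, consumed, app usage) is why end-to-end visibility matters: each layer between the array and the application hides some waste. A tiny sketch with made-up, purely illustrative TB figures shows how the layers compose into one end-to-end efficiency number:

```python
# Hypothetical capacity figures (NOT from the slide) illustrating how
# utilization erodes layer by layer from purchased capacity to live data.
layers = [
    ("physical (array)",   100.0),  # TB physically installed
    ("claimed (host)",      80.0),  # TB claimed by hosts
    ("consumed (volumes)",  60.0),  # TB placed into host volumes
    ("app usage (files)",   24.0),  # TB of actual application data
]

physical = layers[0][1]
for name, tb in layers:
    print(f"{name:20s} {tb:6.1f} TB  ({tb / physical:.0%} of physical)")
# With these example numbers, only 24% of purchased capacity holds live
# data; without visibility across every layer, that gap is invisible.
```

The design point: no single layer's report looks alarming on its own; only correlating all of them exposes how much reclaimable capacity exists.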

26 Discover, Correlate and Display
Optimize Your Storage

27 Why Symantec: The Simplicity of Standardization
TALKING POINTS
At VERITAS, we are working to eliminate some of this complexity for you by developing one solution across all platforms and arrays. Storage Foundation HA is VERITAS Storage Foundation plus VERITAS Cluster Server. This integrated solution helps reduce indirect costs like the ones IDC was talking about, AND there are opportunities to reduce direct costs: many of the software packages on the previous slide are things you pay for, and for some of them, such as HP Operating Environments and array-side snapshot and replication software, you pay quite a bit. We will look at the substantial direct cost-saving opportunities in detail in a few minutes.
OBJECTION HANDLER
SF for Windows doesn't have a file system, so someone could accurately point out that you need two tools. This is true; casually dismiss this slide as a little marketing fluff, but re-emphasize the core point.
TOOLS NEEDED: 1 (Storage Foundation HA/DR)

28 Questions? Name Title Optimize Your Storage

