The Oracle Storage Story, Now and Tomorrow
Chris Wood, Director of Product Management, Axiom LOB
A Simplified History of Sun/Oracle Storage
In 1999 Sun acquired MaxStrat (and me in the process) to get its first real RAID array: the famous (infamous) T3, a.k.a. Project Purple. Sun was not skilled at acquisitions; it managed to "incent" 90% of the MaxStrat development team to quit within the first 90 days, leaving a half-completed program with no skills in house to complete it. About two years later the T3 finally shipped with roughly half the features it was supposed to have and little or no RAS. We sold over 36,000 of them, and then they started to break... Everything hit the fan. I got good at ducking.
If at First You Don't Succeed...
Buy another company with magic technology: Pirus Networks (2002). Pirus had a distributed-architecture crossbar virtualization engine that was supposed to do NAS and SAN, offer any-to-any connectivity and rich data services, be fast as all get-out, and become the industry's first totally virtualized array (the 6920). Unfortunately, this did not happen, and the remnants were sold off to LSI/Engenio for pennies on the dollar. Sun also bought LSC in 2001 and obtained SAM-QFS, but had no clue what to do with it (the "box" mentality). Now Sun had no competitive storage offerings and some cool storage software with no idea how to exploit it. So what to do?
Give Up, and OEM Something That Works
Sun OEMs "Big Fish" from HDS (the 99xx line) and "Little Fish" from LSI (the 6xxx line). Both products work just fine, so Sun decides to develop its own UI, CAM (Common Array Manager), instead of using the well-respected HDS HiCommand and LSI's SANtricity. In keeping with past failures, CAM was universally disliked by customers: slow, buggy, and short on function; otherwise a great product. There was one other problem: it's very hard to make money on OEM'd products, especially if you have daily channel conflict with your major vendor, HDS. Now what?
Fishworks
Sun finally asks the right question: what exactly is storage?
- Disks and enclosures: commodity, low-value "stuff"
- Controller hardware: memory (DRAM), microprocessors, I/O ports, and a ton of software; in other words, a server. We make those!
- What glue is missing? File system, volume manager, RAID code, etc.: ZFS. Gosh, we own that also...
- Management UI: Fishworks and DTrace Analytics
Decision: let's build the glue and tie together the other parts we already have. Thus was the ZFSSA born.
Today
Oracle has two great storage offerings, the Axiom 600 and the ZFSSA: similar but different, more on this later. Oracle has the premier tape offering in the world, hands down. Oracle has rediscovered SAM-QFS; there's still nothing quite like it in the world. Oracle has doubled down on storage and will announce complete product refreshes for both the Axiom and the ZFSSA at Oracle OpenWorld, with availability this year. Sneak peek coming.
Jim Cates on Tape...
His response to my comment that "the SL8500 is really cool":
The T10000C tape drive is also another fairly sophisticated product. The tape is 5 microns thick, and we do fast search down the tape at about 12 m/s (26 mph). The drive writes ~3 micron data tracks on tape at roughly 11 mph to a placement accuracy of tenths of microns. We place 3,584 tracks on half-inch tape using 32 data channels operating in parallel. Oracle designs everything for this drive, including the recording-head film stack, servo pattern, tape path, read/write channels, compression/encryption algorithm, and analog/digital chips. We arguably control the largest breadth of dimensions associated with any product: the recording heads have films measured in angstroms, while the media length is about 1 kilometer, a difference of about 13 orders of magnitude. The rise time of the head and channel is specified in nanoseconds, while the media shelf life is 30 years; this difference spans 17 orders of magnitude.
CW comment: tape is way cool!
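The orders-of-magnitude claims above are easy to sanity-check. A quick sketch, using approximate unit conversions:

```python
import math

# Head film thickness (angstroms) vs. media length (~1 km), in meters.
film = 1e-10           # 1 angstrom
tape_length = 1e3      # 1 km
span_length = math.log10(tape_length / film)

# Head/channel rise time (nanoseconds) vs. 30-year shelf life, in seconds.
rise_time = 1e-9                        # 1 ns
shelf_life = 30 * 365.25 * 24 * 3600    # ~9.5e8 seconds

span_time = math.log10(shelf_life / rise_time)

print(int(span_length))  # 13 orders of magnitude
print(int(span_time))    # 17 orders of magnitude
```

Both figures match the quoted spans.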
But Tape Is Dead...
Every analyst says tape backup is declining, and they are correct! D2D (VTLs, etc.) is winning: faster, deduplication, sex appeal, whatever. Then along came government regulations, politicians, and lawyers. Thank God!
- Data retention policies
- Searchable archives
- Project "Carnivore"
- Big data
- Mobile devices
All that stuff that nobody will (or should¹) read has to go somewhere.
¹ Privacy?
Why Tape: The Economics of Tiered Storage
Tape is the foundation: most of the data, stored at the lowest cost.
- Current (7-30 days), ~5% of data: flash/performance disk. Frequent changes, immediate access, instantaneous protection.
- Recent (30-90 days), ~15% of data: SAS/SATA. Infrequent changes; modifications change classification back to "Current"; slight access delay acceptable.
- Archival (>90 days), ~80% of data: tape. Very infrequent or no changes; offsite/offline/nearline protected.
Result: tape is growing; backup is not.
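The economics of the pyramid fall out of simple arithmetic. A minimal sketch, where the tier splits come from the slide but the $/TB figures are purely illustrative assumptions, not Oracle pricing:

```python
# Blended cost-per-TB of a three-tier pyramid: (fraction of data, $/TB).
# The dollar figures are hypothetical, chosen only to show the shape.
tiers = {
    "flash/performance disk": (0.05, 2000.0),
    "SAS/SATA":               (0.15,  300.0),
    "tape":                   (0.80,   30.0),
}

blended = sum(frac * cost for frac, cost in tiers.values())
all_flash = 2000.0

print(f"blended: ${blended:.0f}/TB vs. all-flash: ${all_flash:.0f}/TB")
```

With 80% of capacity on the cheapest tier, the blended cost lands at a small fraction of the all-flash figure, which is the whole argument for keeping tape at the bottom.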
Analysts: The Digital Archive Market
Storage for archive and retention is a $3B market growing to over $7B in 2017. The use case for archive storage is becoming distinct from the primary and backup use cases. Tape is established as the primary storage tier for long-term archive retention, and is the largest tier.
[Chart: Archive Capacity and Revenue, 2009-2017 (EB and $M)]
Source: IDC Market Analysis, "Worldwide Archival Storage Solutions Forecast: Archiving Needs Thrive in an Information-Thirsty World" (IDC #230762)
Press: Tape
"Archive, legislation, need for off-site data storage, disaster recovery, dealing with massive data quantities all mean there is a place for tape. Even as semi-primary storage, tape can have a role to play." (Clive Longbottom in Dave Bailey's column, Computing)
"(Tape) gives true offsite vaulting for disaster recovery, and requires much less power and cooling. We believe tape will remain viable for the foreseeable future." (Robert Amatruda, IDC, in Iain Thomson's column, V3)
Oracle StorageTek Tape Portfolio
Libraries: SL150 (entry), SL3000, SL8500. Drives: T9840, LTO, T10000. Virtualization: VSM, VLE. Software: device management, data management, tiered storage, virtualization, encryption.
Best scalability, best reliability and availability, best TCO and investment protection. We make a lot of stuff!
Oracle Tape Portfolio Investment (Last 2 Years)
More innovation than ever, with more to come: increased investment in tape R&D, our strongest portfolio ever, and innovation from the integration of Oracle software with StorageTek hardware.
Making Tape Reliable for the Long Term: Proactive Monitoring
Oracle StorageTek Tape Analytics Software
- Simplify tape management: Tape Analytics monitors all your drives and media so you can focus your resources elsewhere.
- Leverage intelligent analytics: Oracle's proprietary algorithms provide proactive health indicators that can be trusted.
- Worry-free deployment: Tape Analytics gathers performance data through the library without ever entering your live data path.
- Grow with peace of mind: a monitoring application that scales to meet your needs, Tape Analytics supports monitoring multiple globally dispersed libraries from a single interface.
LTFS: What Is It?
A self-describing storage format.
- What makes storage self-describing? A file and the index that describes that file are stored together.
- What is unique about self-describing files? They are independent of the OS and of the software used to create them.
- Why do we want software independence? Freedom to access files without proprietary software.
- Why do we want files to be accessible? Two reasons: to easily share files, and to retrieve files in the future.
LTFS with LTO: On-Tape Layout
The LTFS format divides the tape into two partitions: an index partition at the beginning of tape and a file partition holding the files themselves (File 1, File 2, File 3...). Each generation of the index (Index 1, 2, 3...) describes the files written so far, so the current index is always easy to find and read. The TAR format, by contrast, interleaves index blocks and file blocks down the length of the tape, so recovering the full catalog means reading through the tape.
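The self-describing idea can be sketched in a few lines. This is a toy model in the spirit of LTFS, not the real on-tape format: one "file partition" of raw bytes plus one "index partition" of plain JSON, so any reader that understands the index can recover the files without the software that wrote them.

```python
import json

# Toy self-describing layout: files are appended to a byte stream, and a
# separate index records each file's name, offset, and length.
file_partition = bytearray()
index = []

def write_file(name: str, data: bytes) -> None:
    index.append({"name": name, "offset": len(file_partition), "length": len(data)})
    file_partition.extend(data)

def read_file(name: str) -> bytes:
    entry = next(e for e in index if e["name"] == name)
    return bytes(file_partition[entry["offset"]:entry["offset"] + entry["length"]])

write_file("report.txt", b"quarterly numbers")
write_file("photo.raw", b"\x00\x01\x02")

# The index partition is plain JSON: readable with no proprietary software.
index_partition = json.dumps(index)
print(read_file("report.txt"))  # b'quarterly numbers'
```

Because the index travels with the data, the "tape" carries everything needed to interpret itself, which is the property the slide is after.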
Oracle's Open-Format Software
Oracle's StorageTek LTFS, Open Edition, is the first LTFS driver to support the StorageTek T10000C, HP LTO-5, and IBM LTO-5, and Oracle's driver is free. So what's missing?
LTFS-LE: Visibility Into All the Files in a Tape Archive
How does LTFS Library Edition expose a standard interface to tape libraries? Each LTFS tape stores its own file index and files locally, and LTFS-LE aggregates all of those file indices within the application. Applications then see the whole library through a standard POSIX / CIFS / NFS interface.
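The aggregation step can be sketched simply. A toy model of the LTFS-LE idea with hypothetical tape labels and paths, not the product's internals: merge every tape's local index into one library-wide catalog so a lookup answers "which tape holds this file?" without mounting anything.

```python
# Per-tape indices, as each LTFS cartridge would carry locally.
# Tape labels and paths are made up for illustration.
tape_indices = {
    "TAPE001": ["projects/alpha/design.doc", "projects/alpha/budget.xls"],
    "TAPE002": ["archive/2011/logs.tar"],
}

# Merge the per-tape indices into a single library-wide catalog
# mapping each file path to the tape that holds it.
catalog = {path: tape for tape, paths in tape_indices.items() for path in paths}

print(catalog["archive/2011/logs.tar"])  # TAPE002
```

A front end exposing this catalog over POSIX/CIFS/NFS is what turns a pile of cartridges into something applications can browse like a file system.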
ZFS Storage Appliances
Fastest-growing major NAS and unified storage vendor. Single storage software (the ZFS intelligent operating system), three unique storage systems:
- 7120: single controller, 48GB DRAM, 4 CPU cores, 2 PCI slots, 73GB of flash, scales to ~200TB
- 7320: single or dual controllers, 288GB DRAM, 16 processor cores, 2 PCI slots, scales to ~6TB flash and ~500TB
- 7420: active-active controllers, 2TB DRAM, 80 processor cores, 10 PCI slots, scales to ~15TB flash and ~3PB
7420: Engineered for Extreme Performance
Most horsepower possible: 2TB DRAM, 80 cores of processing power, 4TB read flash, 10TB write flash (up to 4 write SSDs per tray), backed by 7.2K and 10/15K SAS-2 disk. Dynamic storage tiering (HSP): automated, real-time data migration from DRAM to multi-class flash to multi-class disk storage. The only software engineered for multi-level flash and disk storage.
ZFS Storage Performance Benchmarks
Leading disk storage performance in all three industry benchmarks:
- SPC-1 (OLTP, IOPS): Oracle 7420 137,066; NetApp 3270 68,035; Oracle 6780 62,261; IBM V7000 53,014
- SPC-2 (DSS/OLAP, throughput): Oracle 7420 10.7GB/s; IBM XIV 7.4GB/s; Oracle 6780 4.8GB/s; IBM V7000 3.1GB/s
- SPECsfs2008 (NFS ops/sec): Oracle 7320 134,140 at 2.5ms response; NetApp 3270 101,183 at 4.3ms response
Newer Oracle results shown in the chart annotations: SPC-1 275,000 rising to 325,000 (December 2013); SPC-2 15GB/s rising to 17GB/s; SPECsfs 267,000 rising to 400,000.
Sources: SAN: storageperformance.org; NAS: spec.org/sfs2008
Caution: Not for external use
Power of Storage Analytics: Real-Time Visibility
Based on Solaris DTrace, Storage Analytics enables storage admins to see statistics and measurements in real time, provides drill-down analysis, and gives visibility into who and what is using storage resources. It is the most powerful tool for troubleshooting and resolving bottlenecks.
Introducing the Pillar Axiom 600: The Core Value Propositions
Pillar Axiom 600: Oracle's Axiom lowers IT costs and speeds up ROI with advanced Quality of Service, simple data management, and industry-leading utilization and scalability.
- Easy-to-use enterprise storage: data and storage services can be promoted or demoted "on the fly" to increase or decrease performance and priority as business and application priorities change.
- Scalable and elastic: an ideal storage platform for virtual infrastructure projects, IT data center consolidation, Oracle deployments, and bringing business-critical applications such as financial and OLTP online with the highest levels of performance, without tradeoffs in capacity utilization.
- Industry-leading efficiency: utilization, performance, and protection, tied to unique service levels. Consolidate your applications on a single storage platform.
Pillar Axiom 600 Storage System
One model that linearly scales both capacity and performance:
- Minimum: a single active-active Slammer with one Brick; 2 control units, 13 drives, 12TB capacity, 48GB cache
- Maximum: up to 4 active-active Slammers with 64 Bricks; 8 control units, up to 832 drives, up to 1.6PB capacity, 192GB cache, 128 RAID controllers; SATA, FC, or SSD storage classes
All models include:
- Patented Quality of Service (QoS) software
- All protocols: FC, iSCSI, CIFS, NFS
- All management software: multi-Axiom management, application profiles, thin provisioning, Storage Domains, path management
- Data protection and mobility software: replication, volume copy, clones
- Engineered integration with Oracle software: HCC, OEM, ASM, SAM, OVM
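The minimum and maximum configurations above are internally consistent, which is easy to verify:

```python
# Sanity-checking the Axiom 600 scaling figures quoted above.
drives_per_brick = 13     # from the one-Brick minimum configuration
max_bricks = 64
max_drives = drives_per_brick * max_bricks
print(max_drives)  # 832, matching the "up to 832 drives" maximum

cache_per_slammer_gb = 48  # cache in the one-Slammer minimum configuration
max_slammers = 4
total_cache_gb = cache_per_slammer_gb * max_slammers
print(total_cache_gb)  # 192, matching the 192GB maximum
```

Both maxima are exactly the per-unit minimums scaled by the unit counts, which is what "linearly scales" means here.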
Top Technology Differentiators

Feature: Quality of Service
Function: Application prioritization and contention management that enables multiple applications to efficiently co-exist on the same storage system.
Benefits: Applications are assigned I/O resources according to their business value rather than being relegated to first-come, first-served; increases overall efficiency and utilization of the storage system.

Feature: Modular Architecture
Function: Dynamically scale both performance and capacity by independently adding Slammers (up to 4) and Bricks (up to 64).
Benefits: Maximum performance and utilization regardless of configuration size; ability to grow and rebalance the storage pool as the business environment changes.

Feature: Distributed RAID
Function: Achieves superior scalability and performance, even during drive rebuilds, by moving RAID local to the storage enclosures.
Benefits: Predictable performance scaling as capacity is added; higher reliability by localizing the drive-rebuild process to the storage enclosure and reducing the RAID rebuild window.
Prioritizing the I/O Queue: Breaking the FIFO Model
In a typical multi-tier array, I/O from every virtual machine, premium and low priority alike, lands in a single FIFO queue and is serviced in arrival order. The Axiom architecture instead sorts incoming I/O into premium, medium, and low priority queues, aligning the business value of the application to its I/O performance level.
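The difference between the two models can be sketched with a priority queue. This is illustrative only, not the Axiom's actual scheduler: each I/O carries a priority class, and a tie-breaking sequence number keeps ordering FIFO within a class.

```python
import heapq

# Lower number = higher priority.
PRIORITY = {"premium": 0, "medium": 1, "low": 2}

# Hypothetical arrival order of I/Os from three VMs of different priorities.
arrivals = [("low", "vm3-io1"), ("premium", "vm1-io1"),
            ("medium", "vm2-io1"), ("premium", "vm1-io2")]

# A plain FIFO would serve these in arrival order. A priority queue
# serves premium first, using the arrival sequence only to break ties.
queue = []
for seq, (cls, io) in enumerate(arrivals):
    heapq.heappush(queue, (PRIORITY[cls], seq, io))

served = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(served)  # ['vm1-io1', 'vm1-io2', 'vm2-io1', 'vm3-io1']
```

The low-priority I/O that arrived first is served last: exactly the FIFO-breaking behavior the slide describes.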
Extending QoS with Storage Domains
Create up to 64 physical domains in a single Axiom:
- Refresh legacy or aging Bricks without disruption: easily and non-disruptively evacuate old drives.
- Isolate application data or workloads to a physical location: secure a workload to a domain, with no co-mingling of data across media.
- Separate user groups or departments to a physical location: secure department or user role data to a domain.
- Separate protocols: isolate SAN and NAS workloads, which can be very different.
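The isolation rule above amounts to a placement constraint. A toy sketch with a hypothetical layout, not the Axiom's implementation: each Brick belongs to exactly one domain, and a volume may only be placed on Bricks in its own domain.

```python
# Hypothetical Brick-to-domain assignment.
brick_domain = {"brick1": "finance", "brick2": "finance", "brick3": "engineering"}

def place_volume(volume_domain: str, bricks: list[str]) -> bool:
    """Allow placement only if every requested Brick is in the volume's domain."""
    return all(brick_domain[b] == volume_domain for b in bricks)

print(place_volume("finance", ["brick1", "brick2"]))  # True
print(place_volume("finance", ["brick1", "brick3"]))  # False: crosses domains
```

Enforcing the check at placement time is what guarantees no co-mingling of data across domains on shared media.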
Modular Scaling with Modular Components
The Axiom is built from three modular components: the Pilot (management controller), Slammers (storage controllers), and Bricks (storage enclosures).
SAM-FS with Axiom: A Complete Solution with Oracle WebCenter Content
WebCenter Content servers write through a QFS file system on a metadata server (metadata on SSD), with the primary file system on FC performance disk. SAM policies then make archive copies: copies 1 and 2 to a SAM disk archive on SATA capacity disk, copy 3 to tape, plus offsite SAM copies to remote systems.
Scale: thousands of SAN clients, hundreds of file systems, billions of files, petabytes of disk cache, exabytes of archive.