1 © Violin Memory, Inc. 2014. Product Deep Dive: Violin Memory 6000 Series
2 More Demand for Data, Now! More applications, more devices, more users across compute, network, and storage. Real-time, concurrent data access on heavily virtualized infrastructure. Multi-core compute is I/O starved: CPUs sit waiting for I/O. Storage must deliver high random IOPS and low latency.
3 How Do You Make Storage Go FAST? The legacy options: short stroking, wide striping, adding SSDs to a legacy array, host-side read caches, and tiering software ("FAST", "Easy Tier"). All of them carry high acquisition costs and even higher operational costs.
4 6000 Series Flash Array. Insanely powerful: eliminate I/O bottlenecks and drastically reduce latency. Amazingly economical: best performance value, lower infrastructure costs. Always available: full redundancy built in, fully hot-swappable. Engineered for flash.
5 Specification, 6000 Series (all models: 8 Gb FC, 10 GbE iSCSI, 40 Gb IB, or PCIe Gen2 connectivity)
- 6212 (3U, Capacity MLC): 24x 512 GiB VIMMs; 12 TiB / 13 TB raw; 6.5 / 5 TiB usable (@84% / 65%); 200K mixed 4 KB IOPS; 1.5 GB/s read bandwidth; ~500 µs nominal latency (mixed)
- 6224 (3U, Capacity MLC): 24x 1 TiB VIMMs; 24 / 26 raw; 13 / 10 usable; 350K IOPS; 2 GB/s; ~500 µs
- 6232 (3U, Capacity MLC): 64x 512 GiB VIMMs; 32 / 35 raw; 20 / 15.5 usable; 500K IOPS; 4 GB/s; ~500 µs
- 6264 (3U, Capacity MLC): 64x 1 TiB VIMMs; 64 / 70 raw; 40 / 31 usable; 750K IOPS; 4 GB/s; ~500 µs
- 6606 (3U, Performance SLC): 24x 256 GiB VIMMs; 6 / 6.5 raw; 3 / 2.5 usable; 450K IOPS; 3 GB/s; ~250 µs
- 6616 (3U, Performance SLC): 64x 256 GiB VIMMs; 16 / 17.5 raw; 10 / 7.5 usable; 1M IOPS; 4 GB/s; ~250 µs
6 Insanely Powerful. No more "I/O wait": 1 million IOPS with latency measured in microseconds. Fast by default: no tuning needed. Get your storage onto the Moore's Law curve. Sustained extreme performance: scale without fear.
7 Architecture Fundamentals: Violin Memory OS (vMOS). System Operations: Web, CLI, REST. System Management: storage virtualization; hardware acceleration; multi-level flash optimization. Data Management: snapshots and clones; thin provisioning; encryption; deduplication*; replication*.
8 vMOS: the Violin Memory Operating System. System Operations: system-wide wear leveling; self-healing, integrated RAID; multi-level wide striping; die and block failure handling; efficient garbage collection. System Management: LUN management; multi-pathing; high-availability clustering; proactive health monitoring; SNMP, CLI, UI, REST API. Data Management: snapshots; clones; full-disk encryption; thin provisioning; space management.
9 Engineered For Performance & Reliability
10 Engineered For Performance & Reliability. Flash memory fabric: the heart of the system; 4x vRAID Control Modules (VCMs); 24 to 64 hot-swappable VIMMs. Array control modules: fully redundant; control the flash memory fabric; provide system-level PCIe switching. Active/active memory gateways: storage virtualization and LUN configuration. I/O modules: FC, 10GbE, IB, and PCIe interfaces.
11 Multi-Level Redundancy: Hot-Swap Anything. Fans (x6), power supplies (x2), VIMMs (60 + 4 hot spares), vRAID controllers (x4), array controllers (x2), memory gateways (x2).
12 Flash Memory Fabric: Up to 1 Million IOPS. Up to 64 Violin Intelligent Memory Modules: PCIe-connected; fully hot-swappable; 4 global spares. 4 active-active vRAID Control Modules. Fabric-level flash optimization: patented vRAID algorithms; dynamic wear leveling; multi-level error-correction code; hardware-based garbage collection. Performance optimization: dynamic wide striping of data; flash erase hiding; VIMM failure protection.
13 No SSDs: Violin Intelligent Memory Modules. The core building block of the memory fabric: 256 GB SLC flash, or 512 GB / 1024 GB MLC flash; 3 GB to 8 GB of DRAM for all flash metadata and write I/O buffering; hot-swappable. Proprietary flash endurance and wear-leveling techniques extend flash life up to 10x: continuous data scrubbing; advanced hardware-based ECC; automated in-place die failure handling.
14 System-Level Automatic Data Placement Optimization. By default, each VCM controls 15 VIMMs, organized as 3 protection groups of 5 VIMMs each. Data is dynamically placed on VIMMs. Example of an incoming 4 KB write: it is received by a Memory Gateway (MG), forwarded to a VCM, and split into 4x 1 KB data writes plus 1 parity write across the 5 VIMMs of a protection group. Any VIMM failure triggers activation of a global spare VIMM and a vRAID rebuild.
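The write split above can be sketched in a few lines. This is a hypothetical illustration, not Violin's actual vRAID algorithm: it assumes a simple single-XOR-parity scheme, splitting one 4 KB block into four 1 KB data chunks plus one parity chunk, one chunk per VIMM in a 5-VIMM protection group.

```python
from functools import reduce

# Hypothetical sketch of the 4 KB write split described above: 4x 1 KB
# data chunks plus one XOR parity chunk, one per VIMM in a 5-VIMM
# protection group. Illustrative only, not Violin's vRAID code.

def split_with_parity(block: bytes, data_chunks: int = 4) -> list[bytes]:
    """Split `block` into `data_chunks` equal chunks plus an XOR parity chunk."""
    assert len(block) % data_chunks == 0
    size = len(block) // data_chunks
    data = [block[i * size:(i + 1) * size] for i in range(data_chunks)]
    # Parity byte i is the XOR of byte i of every data chunk.
    parity = bytes(reduce(lambda a, b: a ^ b, cols) for cols in zip(*data))
    return data + [parity]  # 5 writes, fanned out to the protection group
```

With XOR parity, any single chunk (hence any single VIMM) can be lost and recomputed from the other four, which is what makes the spare activation and vRAID rebuild on this slide possible.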
15 Every LUN Capable of Up to 1 Million IOPS, By Default. The full system bandwidth is available to every LUN through automatic multi-level striping: gateways stripe across VCMs; each VCM wide-stripes across its VIMMs; each VIMM wide-stripes across its internal flash chips. All operations are implemented in hardware at line speed, ensuring the lowest possible latency.
16 Low-Level Flash Operations Can Lead to Poor I/O Latency. Read latency is low; a write takes 10x to 20x longer than a read; an erase takes roughly 100x longer than a read.
Read / Write / Erase latency:
- SLC: 25 µs / 250 µs / 1,000 µs
- eMLC: 50 µs / 1,500 µs / 5,500 µs
- MLC: 50 µs / 900 µs / 3,000 µs
Spike-free low latency requires special handling of erase operations.
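A quick worked example with the SLC figures from the table shows why erases dominate tail latency. The "read queued behind one erase" scenario is illustrative, not a measured figure:

```python
# Worked example using the SLC figures from the table above (all in µs).
READ_US, WRITE_US, ERASE_US = 25, 250, 1_000

# A read that lands on a die just after an erase has started must wait
# out the erase before its own 25 µs of service time can begin.
worst_case_read_us = ERASE_US + READ_US   # 1,025 µs instead of 25 µs
slowdown = worst_case_read_us / READ_US   # a 41x latency spike
```

For MLC the same calculation gives (3,000 + 50) / 50 = 61x, which is why erase scheduling matters even more on capacity-optimized flash.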
17 The Write Cliff Affects All Flash Solutions to Some Degree. New write operations get queued behind erase operations, causing up to a 60% transient drop in random-write bandwidth (the "write cliff"; source: NERSC). The real issue is that erase operations also get in the way of read operations. Mitigating or eliminating the write cliff requires special flash-management logic.
18 Patented Algorithms Deliver Spike-Free Low Latency. Background garbage collection ensures free pages for all incoming writes. Garbage collection is implemented in hardware within each VIMM for line-rate performance, and is tightly scheduled and orchestrated at the system level so it does not affect system performance: only one VIMM per protection group is allowed to collect garbage at a time.
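The "one VIMM per protection group at a time" rule can be pictured as a rotating token. A toy round-robin sketch follows; the policy and function names are hypothetical, not vMOS code:

```python
# Toy round-robin sketch of the scheduling rule above: in each round,
# exactly one VIMM per 5-VIMM protection group holds the "GC token",
# so the other four remain free to serve reads. Hypothetical, not vMOS.

def gc_tokens(num_groups: int, round_no: int, group_size: int = 5) -> list[int]:
    """For each protection group, which VIMM index may run GC this round."""
    return [round_no % group_size for _ in range(num_groups)]

def readable_vimms(round_no: int, group_size: int = 5) -> set[int]:
    """VIMM indices in a group guaranteed readable while one collects."""
    busy = round_no % group_size
    return {i for i in range(group_size) if i != busy}
```

In every round exactly four of the five VIMMs are readable, which (with single parity) is precisely enough to reconstruct any chunk held by the busy VIMM.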
20 vRAID Erase Hiding in Action. Reads are never blocked by garbage collection: if the target VIMM is busy, the 4 KB read is rebuilt on the fly from the remaining 4 VIMMs in the protection group (a vRAID rebuild). System-level orchestration enables sustained low latency for mixed workloads.
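The read path on this slide, sketched under the same hypothetical single-XOR-parity layout as the earlier write sketch: when the VIMM holding the wanted chunk is busy erasing, XOR the other four chunks together to reconstruct it.

```python
from functools import reduce

# Hypothetical erase-hiding read path (assumes single-XOR parity across
# a 5-VIMM protection group). Illustrative, not Violin's implementation.

def read_chunk(chunks: list[bytes], want: int, gc_busy: set[int]) -> bytes:
    """chunks = 4 data chunks + parity at index 4; gc_busy = VIMMs erasing."""
    if want not in gc_busy:
        return chunks[want]  # fast path: the VIMM is free, read it directly
    # vRAID rebuild: XOR the remaining 4 chunks to recover the wanted one,
    # so the read never waits for the erase to finish.
    others = [c for i, c in enumerate(chunks) if i != want]
    return bytes(reduce(lambda a, b: a ^ b, cols) for cols in zip(*others))
```

Because XOR is its own inverse, data XOR parity XOR the other data chunks yields the missing chunk, and the reconstruction costs four reads instead of one blocked read.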
21 World-Record-Breaking Performance. June 29, 2010: TPC-E world record. May 9, 2011: TPC-C world record. May 23, 2011: TPC-C world record. June 22, 2011: file-system world record. December 8, 2011: TPC-C world record. September 12, 2012: VMmark 2.1 world record. September 18, 2012: VMmark 2.1 world record. September 27, 2012: TPC-C world record. October 2, 2012: VMmark 2.1 world record. November 13, 2012: five VMmark 2.1 world records. http://vmem.com/benchmarks
22 Amazingly Economical: Reduce Cost Across Your Infrastructure. Reduce storage costs by 7x compared to disk: never overprovision. Unmatched operational cost: a plug-and-play experience. Near-instant ROI: optimize server and license costs.
23 Storage Cost per Application Is What Matters. Database requirement: 1 TB at 20K IOPS. Tier-1 disk array: high-performance HDDs at $4/raw GB, each delivering 200 IOPS and 146 GB; hitting 20K IOPS takes 100 disks, i.e. 100 x 146 GB = 14.6 TB raw. Violin Memory 6264: flash memory at $5/raw GB ($8.5/usable GB), vRAID, and 750K IOPS available to a LUN of any size.
24 Application Owners Pay 7x Less on Violin. Same database requirement: 1 TB at 20K IOPS. Tier-1 disk array: 14.6 TB raw at $4/raw GB = $58,400 for this database. Violin Memory 6264: 1 usable TB at $8.5/usable GB = $8,500 for this database. Application storage cost is 7x lower with Violin.
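The slides' arithmetic, reproduced step by step with the figures given above:

```python
# Reproducing the cost comparison from the two slides above.
required_iops = 20_000
iops_per_disk = 200     # high-performance HDD, per the slide
gb_per_disk = 146

disks_needed = required_iops // iops_per_disk   # 100 disks to reach 20K IOPS
raw_gb = disks_needed * gb_per_disk             # 14,600 GB (14.6 TB) raw
disk_array_cost = raw_gb * 4                    # $4/raw GB -> $58,400
violin_cost = 1_000 * 8.5                       # $8.5/usable GB x 1 TB -> $8,500
ratio = disk_array_cost / violin_cost           # ~6.9, the slide's "7x"
```

Note the disk array is sized by IOPS, not capacity: the database needs only 1 TB, but 14.6 TB raw must be bought to reach 20K IOPS, which is where the 7x gap comes from.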
25 Simple Operations. Provision storage and go: select the LUN capacity and let vRAID automate placement; no tuning required; hot-swap components for non-disruptive operations. Seamlessly handle performance spikes (customer example: rogue full-table scans in DBA scripts; the system absorbed the load spikes and still met core application SLAs). Advanced graphical user interface: fully customizable dashboard; detailed performance statistics; also supported as a vCenter plug-in.
28 Violin Symphony: Manage PBs in a Flash! Manage hundreds of Violin flash arrays through a single interface. Enable multi-tenancy with role-based access control and Smart Groups. Share information through custom reports with up to 2 years of historical data. Achieve proactive wellness with advanced health and SLA monitoring. Personalize visibility through fully customizable dashboards and gadgets.
29 Eliminate "I/O Wait"; Reduce HW & SW Costs. Storage at the speed of memory: more ops/sec with fewer CPU cores; more ops/sec with less DRAM cache; fewer software licenses. CPU cycle with magnetic disk: 80% wait, 20% work. CPU cycle with memory storage: 5% wait, 95% work.
30 VMworld 1 Million IOPS: 2011 vs. 2012. 2011: 8 engines, 960 drives, 1 million read IOPS, 5 racks (210 RU), 32,000 watts. 2012: 2 Violin 6616 memory arrays, 1 VM at 1 million IOPS (random R/W mix), 6 RU (97% less) and 3,600 watts (90% less).
31 Bringing the Speed of Flash to All Your Applications. Targeting all applications: enterprise applications on legacy SAN; enterprise applications on memory SAN; scale-out applications. Disruptive economics: reduced capex; streamlined opex; ready for petabyte scale.
32 Backup slides
33 Violin 6264: A New Standard in Performance Economics. Compared with the Violin 6232: same footprint, double the capacity, higher efficiency, better economics.
34 Violin 6264 Flash Memory Array at a Glance. 64 TiB of capacity in the same 3U form factor; 750K IOPS (peak, 70:30 mix); 50% lower power; memory storage at disk $/GB; 19 nm process geometry.
35 Comparing 6264 and 6232 Hardware & Software. The 6264 draws 250 W less than the 6232 (1,500 W vs. 1,750 W), the result of a more power-efficient VIMM hardware design. 6264-specific hardware improvements: new 1 TiB MLC VIMMs; a new chassis with better cable management; FC, iSCSI, and IB configurations ship with a new ACM providing internal clustering for vMOS 6 and a native 40GbE port (for future use, enabling data movement across arrays); the PCIe configuration uses the same ACM as the 6232. The 6264 requires Array Firmware 6.2 or later: the Memory Gateway software is functionally equivalent to vMOS 6.0, and the Memory Array firmware adds resilience plus support for the new ACMs and VIMMs. vMOS 6.3 will support all 6600 and 6200 Series arrays.
36 6264 Array Control Module with 40 Gbps Ports
37 Back-panel view: 6264 FC/iSCSI/IB with new ACM