
Distributed Hyperscale Collaborative Storage




1 Distributed Hyperscale Collaborative Storage
Product Overview - Simone Arvigo, MEDIAPOWER

2 A new type of data is driving this growth
The Big Data Reality
Information universe in 2009: roughly 800 exabytes. By the 2020s: roughly 35 zettabytes.
Information Explosion 2.0: a new world, more than just words - everything is on-line. A recent cartoon shows a man with wires hooked into him saying to a friend, "My washing machine just texted me; my whites are done." Funny, but we are there today. The future of information and connectedness has literally come to this: everything will be connected to everything else, and everything will have an "opinion" (information) for everything else. The amount of information created in the next 2-3 years will dwarf all information generated since the beginning of mankind. This is the information explosion. The "information age" of the past 10 years will look like a firecracker next to the supernova of information coming in the next 10 years. This is the era of BIG DATA.
All this data is being generated by machines - surveillance cameras, Web 2.0 sites, system logs, and rich media - and it is swamping companies' ability to store and process it. Storage systems that store and manage this data must move beyond simple scalability: they must scale exponentially across globally distributed environments, yet let users around the world collaborate on and update data as if they were in the same room or at the same site.
A new type of data is driving this growth:
- Structured data - relational tables or arrays
- Unstructured data - all other human-generated data
- Machine-generated data - growing as fast as Moore's Law

3 A Paradigm Shift is Needed
File storage vs. object storage:
- Scalability: millions of files vs. hundreds of billions of objects
- Access: point-to-point, local vs. peer-to-peer, global
- Management: fault-tolerant vs. self-healing, autonomous
- Data model: files with extent lists vs. information objects with metadata
- Space utilization: roughly 75% on average vs. near 100%

The Era of Big Data
We are in the era of "Big Data". Machine-generated data and rich unstructured data are a key driving force, and companies are losing the data management battle: the cost of ingesting, storing, processing and managing big data is affecting the bottom line. "Swimming in sensors, drowning in data."

Why not traditional file systems? In the world of Big Data, traditional file system and database storage paradigms no longer work. File system overhead, scalability limitations, management cost, and configuration complexity have all contributed to an ever-increasing TCO. Database scaling is being addressed by the NoSQL movement, but a different approach is needed to solve the file system problem. A new paradigm is needed that provides a cost-effective, scalable, distributed solution for big data environments. CME is a poster child for Big Data, and cloud is the optimal solution for it.

4 What Big Data Needs
- Hyper-scale: a world-wide, single & simple namespace; dense, efficient & green; high-performance, versatile on-ramps and off-ramps
- Geographically distributed: process the data close to where it is generated rather than copying vast amounts of data to the processing; cloud-enabling
- Resiliency with extremely low TCO: no complexity, near-zero administration
- Ubiquitous access: legacy protocols and web access
Big data requires intelligent storage systems that scale in every dimension, with ease of use, a low environmental footprint, and fast, easy on- and off-ramps. Data must be distributed so users can process it wherever they are located, yet it must also be possible to process it close to where it is generated. And last but not least, it must be extremely simple, protected, and self-managing, and it must be accessible to applications across distributed wide-area networks through a wide range of interfaces.

5 Storage should improve collaboration
… Not make it harder
- Minutes to install, not hours
- Milliseconds to retrieve data, not seconds
- Replication built in, not added on
- Instantaneous recovery from disk failure, not days
- Built-in data integrity, not silent data corruption

6 Introducing: DDN Web Object Scaler
Content Storage Building Block for Big Data Infrastructure
- Industry's leading scale-out object storage appliance, with unprecedented performance & efficiency
- Built-in single namespace & global content distribution
- Optimized for collaborative environments: geographic location intelligence optimizes access latency; just-in-time provisioning
- Lowest TCO in the industry: simple, near-zero administration; automated "continuity of operations" architecture

The DDN Web Object Scaler (WOS) is the industry-leading distributed storage platform built to meet the needs of Big Data. WOS is a revolutionary object-based cloud storage system that addresses the needs of content scale-out and global distribution. At its core is the WOS object clustering system - intelligent software that allows a massively scalable content delivery platform to be created out of small building blocks, enabling the system to start small and easily grow to multi-petabyte scale.

WOS is a fully distributed system, meaning there are no single points of failure or bottlenecks. This allows the system to scale, with each new cloud building block (called a node) linearly adding to the system's performance capabilities and storage capacity. WOS nodes are self-contained appliances configured with disk storage, CPU, and memory resources. Each node is pre-configured with the WOS software and Gigabit Ethernet network interfaces. A cloud can be created out of any nodes that have Internet Protocol (IP) connectivity to each other, regardless of their physical location. Various capacity and performance configurations of WOS nodes are available, and clouds support heterogeneous node types, allowing performance and scale to be tailored to the needs of the environment.

WOS changes the paradigm for storage of big data in several ways:
- Distributed hyperscale object storage that scales not only up (inside a local site) but also out
- Collaborative, so many users across multiple sites can ingest, read, and update files concurrently
- Low (near-zero) administrative overhead, since big data administration, maintenance, and scale-out must be automated
- Data locality & global collaboration

7 The WOS Initiative
- Understand the data usage model in a collaborative environment where immutable data is shared and studied.
- Build a simplified data access system with minimal layers.
- Eliminate the concept of FAT and extent lists.
- Reduce the instruction set to PUT, GET, & DELETE.
- Add the concept of locality based on latency to data.

8 WOS Fundamentals
- No central metadata store; distributed management
- Self-managed: online growth, balancing & replication
- Self-tuning: zero-intervention storage
- Self-healing: resolves all problems & failures with rapid recovery
- Single-pane-of-glass management of global, petabyte-scale storage

9 WOS – Architected for Big Data
Hyper-Scale
- 256 billion objects per cluster; scales to 23PB per cluster
- Start small, grow to tens of petabytes
- Network & storage efficient

Global Reach & Data Locality
- Up to 4-way replication
- Global collaboration: access the closest data
- No risk of data loss

Resiliency with Near-Zero Administration
- Self-healing; all drives fully utilized
- 50% faster recovery than traditional RAID
- Reduces or eliminates service calls

Universal Access
- Available API commands include PUT object, GET object, DELETE object, and RESERVE Object ID
- Access WOS via standard tools including REST, Java, PHP, Python and C++

This slide can be used to summarize WOS if you have limited time.

Hyper-scale quadrant: WOS nodes are self-contained appliances configured with disk storage, CPU and memory resources. Each node is pre-configured with the WOS software and Gigabit Ethernet network interfaces. A cloud can be created out of any nodes that have IP connectivity to each other, regardless of their physical location. WOS nodes are available in two versions: the WOS 1600, a 3U 16-drive node providing up to 48TB (3TB drives) of SAS/SATA storage per tray, and the WOS 6000, a 4U 60-drive node providing up to 180TB of SAS/SATA storage per tray. A rack containing 11 WOS 6000 trays holds about 2PB of storage. The WOS 2.0 limits - 256 billion objects per cluster, scaling to 23PB per cluster - will be extended in each release. The system is easy to extend and grows brick by brick. Efficiency is key from a density, power, disk-utilization and I/O perspective.

Global Reach & Data Locality quadrant: WOS is architected from the ground up to be distributed across multiple sites. Data is replicated across up to 4 sites (to be extended in upcoming releases) based on policy. While replication is typically used for data protection & DR, with WOS it enables something even more powerful: data locality and global collaboration. WOS maintains intelligence about the location of objects, so users at each replica site always access the data closest to them (by latency), and users can collaborate (ingest, access & update data) at local speeds across multiple locations. WOS is uniquely architected to enable local data access and global collaboration.

Resiliency with near-zero administration: WOS self-healing provides completely automated data protection and reduces or eliminates service calls. WOS resiliency and administration are superior to competitive approaches in several ways: all drives are fully utilized (any free capacity on any drive is part of the spare pool); re-balance times are 50% shorter because only actual data is copied; faster recovery times increase overall performance and reduce the risk of data loss; drive failures decrease overall capacity only by the size of the failed drives; and total capacity may be restored by replacing drives during scheduled maintenance. All combined, WOS administrative simplicity, self-healing capabilities, and ease of provisioning & deployment deliver the lowest TCO in the industry.

Universal Access: WOS provides NAS, cloud, and native API interfaces so that its benefits are available to a wide range of applications. The NAS protocol gateway is used by internal POSIX applications that want to use internal cloud storage; it makes efficient use of WOS storage with easy provisioning/de-provisioning, federates across multiple sites, and provides collaboration & data locality services. The cloud storage platform is targeted at cloud service providers and private clouds; it enables S3-enabled applications to use WOS storage at a fraction of the price and supports full multi-tenancy, bill-back, geo-replication, encryption, and per-tenant reporting. The native object interface lets WOS integrate quickly into existing and new environments using the simple but powerful WOS native storage interface.
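A minimal sketch of how an application might drive that PUT / GET / DELETE / RESERVE command set over HTTP/REST from Python. Only the command names and the list of access languages come from the slide; the node address, URL paths and header names below are illustrative assumptions, not the documented WOS wire format.

import requests

WOS_NODE = "http://wos-node.example.com:8080"   # hypothetical node address
POLICY = "replicate-twice-zones-1-3"            # example policy name

def put_object(data: bytes, policy: str = POLICY) -> str:
    """Store an object and return the Object ID (OID) assigned by the cluster."""
    r = requests.post(f"{WOS_NODE}/cmd/put", data=data,
                      headers={"x-ddn-policy": policy})
    r.raise_for_status()
    return r.headers["x-ddn-oid"]               # assumed response header

def get_object(oid: str) -> bytes:
    """Retrieve an object by OID; the cluster serves it from the lowest-latency replica."""
    r = requests.get(f"{WOS_NODE}/cmd/get", headers={"x-ddn-oid": oid})
    r.raise_for_status()
    return r.content

def delete_object(oid: str) -> None:
    """Delete all replicas of an object."""
    requests.delete(f"{WOS_NODE}/cmd/delete",
                    headers={"x-ddn-oid": oid}).raise_for_status()

def reserve_oid(policy: str = POLICY) -> str:
    """Reserve an OID up front so the application can record it before writing."""
    r = requests.post(f"{WOS_NODE}/cmd/reserve", headers={"x-ddn-policy": policy})
    r.raise_for_status()
    return r.headers["x-ddn-oid"]

The point of the reduced instruction set is that the client stays extremely thin: there is no open/close, seek, or directory traversal, which is what makes a single disk I/O per read or write possible for small objects.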

10 WOS & the Big Data Life Cycle
WOS is an intelligent, scalable object store ideal for both high-frequency transactions and content archiving & geo-distribution.
(Chart: content growth rates and access frequency across the data life cycle - day 1, 30 days, 90 days, 1 year, 2 years, 5 years, n years - spanning real-time processing, content distribution, and high-performance distribution & long-term preservation.)
- WOS delivers high performance
- WOS delivers automatic replication & geo-distribution
- WOS delivers low TCO & massive scalability

11 Distributed Hyperscale Collaborative Storage Global View, Local Access
Replicate & collaborate (ingest, access & update data) at local speeds across multiple locations - data locality & global collaboration.
(Diagram: latency maps for Los Angeles and Madrid users, with site-to-site latencies of 10 ms, 30-40 ms, and 80 ms.)

Key Features
- Replication across up to 4 sites
- Geographic, location & latency intelligence
- Local-speed data access, even via NAS protocols
- Data and DR protected

Key Benefits
- Users can access and update data simultaneously across multiple sites
- Increases performance & optimizes access latency
- No risk of data loss

12 WOS: Distributed Data Mgmt.
Storing a file (PUT):
1. A file is uploaded to the application or web server.
2. The application makes a call to the WOS client to store (PUT) a new object.
3. The WOS client stores the object on a node; subsequent objects are automatically load-balanced across the cloud.
4. The WOS client returns a unique Object ID (e.g. OID = 5718a), which the application stores in lieu of a file path and registers with its content database.
5. The system then replicates the data according to the WOS policy; in this example the object is replicated over the LAN/WAN from Zone 1 to Zone 2.

Retrieving a file (GET):
1. A user needs to retrieve a file.
2. The application makes a call to the WOS client to read (GET) the object, passing the unique Object ID.
3. The WOS client automatically determines which nodes hold the requested object, retrieves it from the lowest-latency source, and rapidly returns it to the application.
4. The application returns the file to the user.

(Diagram: app/web servers and a content database holding the OID, in front of WOS nodes in Zone 1 and Zone 2 connected by LAN/WAN.)
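To make the flow above concrete, here is a self-contained sketch in which the application records only the OID in its content database, never a file path. The wos_put/wos_get helpers are in-memory stand-ins for the WOS client library, not its real API, and SQLite stands in for the content database.

import sqlite3
import uuid

_FAKE_CLOUD = {}   # in-memory stand-in for the WOS cluster so the sketch runs end to end

def wos_put(payload: bytes, policy: str) -> str:
    """Stand-in for the WOS client PUT: stores the object and returns a new OID."""
    oid = uuid.uuid4().hex
    _FAKE_CLOUD[oid] = payload
    return oid

def wos_get(oid: str) -> bytes:
    """Stand-in for the WOS client GET: retrieves the object by OID."""
    return _FAKE_CLOUD[oid]

db = sqlite3.connect(":memory:")   # stand-in for the application's content database
db.execute("CREATE TABLE assets (name TEXT PRIMARY KEY, oid TEXT)")

def upload(name: str, payload: bytes) -> str:
    oid = wos_put(payload, policy="replicate-to-zone-2")             # PUT a new object
    db.execute("INSERT OR REPLACE INTO assets VALUES (?, ?)", (name, oid))
    db.commit()                                                      # OID stored in lieu of a path
    return oid

def download(name: str) -> bytes:
    (oid,) = db.execute("SELECT oid FROM assets WHERE name = ?", (name,)).fetchone()
    return wos_get(oid)                                              # GET by OID

upload("report.pdf", b"%PDF-1.4 example payload")
assert download("report.pdf").startswith(b"%PDF")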

13 WOS Building Blocks
- WOS 6000: 4U, high-density, 60-drive WOS node (SAS/SATA), up to 180TB per tray; 2PB with 11 units per rack
- WOS 1600: 3U, high-performance, 16-drive WOS node (SAS/SATA), up to 48TB per tray; 544TB with 15 units per rack

Key Metrics
- Built on DDN's industry-leading high-performance storage platforms
- 4 GigE connections per node
- Highest density and scalability in the market: 1.98PB per rack, up to 23PB per cluster, 660 spindles per rack
- 22 billion objects per rack, 256 billion objects per cluster
- 99% storage efficiency for any mix of file sizes from 512 bytes to 500GB
- Linear cluster performance scaling
- Low latency: one disk I/O per read or write for objects < 1MB

- Single global namespace: eliminates complexity
- Massive scalability: in both performance & capacity
- Unrivaled simplicity: translates directly to TCO
- Self-healing: zero intervention to recover from failures
- Replication ready: distribute data globally
- Collaboration: data locality with a single global view
- Disaster recoverable: for uninterrupted transactions
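A quick back-of-envelope check (decimal units) of the WOS 6000 density figures quoted above, using the 3TB drive capacity the slide lists:

drive_tb = 3                               # 3TB SAS/SATA drives
tray_drives = 60                           # WOS 6000: 60-drive 4U tray
trays_per_rack = 11

tray_tb = tray_drives * drive_tb                     # 180 TB per tray
rack_pb = trays_per_rack * tray_tb / 1000.0          # 1.98 PB per rack
spindles_per_rack = trays_per_rack * tray_drives     # 660 spindles per rack

print(tray_tb, rack_pb, spindles_per_rack)           # 180 1.98 660

These match the per-tray, per-rack and spindle counts on the slide; the 23PB cluster limit then corresponds to roughly a dozen such racks.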

14 WOS Under the Hood

WOS Tray Components
- Processor / controller motherboard
- WOS Node software
- Array controllers
- SAS or SATA drives (2 or 3TB)
- 4 GigE network connections per node

WOS Software
The WOS node software is the main node-side process. It is responsible for:
- Establishing & maintaining a connection to the cluster via the Cluster Manager (MTS)
- Servicing I/O requests from clients and applications (via WOS-Lib or HTTP/REST)
- Directing local I/O requests to disk via the Local Data Store
- Replicating objects during PUTs
- Replicating Object Groups (ORGs) to maintain policy compliance
- Aggregating node-side statistics for communication back to the MTS
- Monitoring hardware health via IPMI or SES

Though "Network Services" is depicted as a single component in the diagram, it is actually a set of components instantiated for each distinct communication channel (clients/apps, other WOS nodes, and the MTS). It provides TCP/IP socket setup and tear-down, heartbeats, reconnect logic, message formatting, message handlers, and so on.

(Diagram: clients/apps connect via WOS-Lib or HTTP/REST over 4 GigE to the WOS node's API and OID map, which in turn talk to other WOS nodes, the MTS, and the local array controllers and drives.)

15 Intelligent WOS Objects
Sample Object ID (OID): ACuoBKmWW3Uw1W2TmVYthA
Each WOS object carries:
- Full file or sub-object (the payload)
- User metadata: key/value or binary (e.g. Object = Photo, Tag = Beach, thumbnails)
- Policy (e.g. "Replicate twice; Zones 1 & 3")
- Signature: a random 64-bit key to prevent unauthorized access to WOS objects
- Checksum: a robust 64-bit checksum used to verify data integrity during every read
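For illustration, the field list above can be modelled roughly like this. The field names, the truncated-SHA-256 checksum and the random signature are stand-ins chosen for the sketch, not WOS internals; only the list of fields comes from the slide.

import hashlib
import secrets
from dataclasses import dataclass, field

def checksum64(payload: bytes) -> int:
    """64-bit integrity checksum (here the first 8 bytes of SHA-256, as a stand-in)."""
    return int.from_bytes(hashlib.sha256(payload).digest()[:8], "big")

@dataclass
class WosObject:
    oid: str                                        # e.g. "ACuoBKmWW3Uw1W2TmVYthA"
    payload: bytes                                  # full file or sub-object
    metadata: dict = field(default_factory=dict)    # user key/value or binary metadata
    policy: str = "replicate-twice-zones-1-3"       # replication policy
    signature: int = field(default_factory=lambda: secrets.randbits(64))  # random 64-bit access key
    checksum: int = 0                               # verified on every read

    def __post_init__(self):
        self.checksum = checksum64(self.payload)

    def verify(self) -> bool:
        """Re-compute the checksum on read to catch silent data corruption."""
        return checksum64(self.payload) == self.checksum

photo = WosObject(oid="ACuoBKmWW3Uw1W2TmVYthA", payload=b"...jpeg bytes...",
                  metadata={"object": "photo", "tag": "beach"})
assert photo.verify()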

16 Efficient Data Placement
- WOS "buckets" contain objects of similar size, to optimize placement.
- WOS object "slots": different-sized objects are written into slots contiguously.
- Slots can be as small as 512B, to efficiently support the smallest of files.
The result: WOS eliminates the wasted capacity seen with conventional NAS storage.
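A tiny calculation makes the point. It compares the bytes lost to rounding each object up to a conventional file-system allocation unit (4 KiB is an assumed example, not a property of any particular NAS) with rounding up to 512-byte WOS slots; the object sizes are arbitrary samples.

def waste(size: int, unit: int) -> int:
    """Bytes lost to rounding an object up to the next allocation unit."""
    return (-size) % unit

sizes = [700, 1_500, 3_000, 10_000, 1_048_576]          # a mix of small and large objects

fs_waste = sum(waste(s, 4096) for s in sizes)           # 4 KiB file-system blocks -> 9376 bytes lost
wos_waste = sum(waste(s, 512) for s in sizes)           # 512 B WOS slots          -> 672 bytes lost

print(fs_waste, wos_waste)

The smaller the objects, the bigger the gap, which is why slot packing matters for workloads dominated by small machine-generated files.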

17 Object Storage vs. File System Space Utilization
On average, WOS utilizes 25% more of the available disk space than SAN/NAS file systems do. In a 1PB deployment, that stranded space totals 250TB, which adds $50K-$100K of system cost as well as ongoing power & space costs. WOS eliminates the stranded capacity inherent in SAN/NAS file system storage.

18 500TB Comparison – Total TCO Overview
WOS - The TCO Leader. WOS by the TCO numbers:
- WOS annual operating costs are less than one third of S3 costs
- WOS total TCO is 50% of S3's TCO over a 3-year period
- The first year includes WOS acquisition and deployment costs
- Follow-on years include WOS storage growth and management costs
Moving an existing Amazon Web Services workload to an internal/private cloud with WOS storage can save 50%+ in TCO over 3 years.

19 WOS Advantages: Simple Administration
Designed with a simple, easy-to-use GUI. "This feels like an Apple product" - early customer quote.

20 WOS Deployment & Provisioning
WOS building blocks are easy to deploy & provision - in 10 minutes or less, with no file system to configure:
1. Provide power & network for the WOS node.
2. Assign an IP address to the WOS node & specify the cluster name ("Acme WOS 1").
3. Go to the WOS Admin UI; the node appears in the "Pending Nodes" list for that cluster.
4. Drag & drop the node into the desired zone (e.g. San Francisco, New York, London, Tokyo) - simply drag new nodes to any zone to extend storage.
5. Assign a replication policy (if needed).
Congratulations! You have just added 180TB to your WOS cluster!

21 Intelligent Data Protection RAID Rebuild vs WOS Re-Balance
Traditional RAID storage rebuilds drives:
- Lost capacity - spare drives strand capacity
- Long rebuild times - the whole drive must be rebuilt even though the failed drive was only partially full
- Higher risk of data loss - if a spare drive is not available, no rebuild can occur
- Increased support costs - an immediate service call is required to remedy a low-spares condition
- Reduced write performance - RAID reduces disk write performance, especially for small files

WOS (replicated or Object Assure) re-balances data across drives:
- All drives fully utilized - any free capacity on any drive is part of the spare pool
- 50% shorter re-balance times - only actual data is copied
- Faster recovery times increase overall performance and reduce the risk of data loss
- Drive failures decrease overall capacity only by the size of the failed drives
- Total capacity may be restored by replacing drives during an optional scheduled service call
- A drive failure causes objects to be copied from replica or corrected data onto the other existing drives

(Diagram: FS + RAID 6 rebuild with dedicated spares and an immediate service call vs. a WOS re-balance with an optional scheduled service call that restores capacity; available capacity of 116-120TB shown at each stage.)
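To illustrate why copying only real data matters, here is a back-of-envelope comparison of a whole-drive RAID rebuild with a many-to-many WOS re-balance. The fill level, throughput and source-drive count are illustrative assumptions, not measured figures.

# Whole-drive RAID rebuild vs. WOS re-balance that copies only the stored data.
drive_tb = 3.0        # capacity of the failed drive
fill = 0.4            # fraction of the drive that actually held data (assumed)
raid_mbps = 100       # sustained single-drive rebuild rate in MB/s (assumed)
wos_mbps = 100        # per-drive copy rate during re-balance in MB/s (assumed)
sources = 20          # drives holding replicas of the lost objects (assumed)

mb_per_tb = 1_000_000

raid_hours = drive_tb * mb_per_tb / raid_mbps / 3600                   # entire drive is rebuilt
wos_hours = drive_tb * fill * mb_per_tb / (wos_mbps * sources) / 3600  # only real data, in parallel

print(f"RAID rebuild ~{raid_hours:.1f} h, WOS re-balance ~{wos_hours:.1f} h")

Under these assumptions the rebuild takes about 8 hours while the re-balance finishes in roughly 10 minutes; the exact ratio depends on how full the failed drive was and how many drives share the copy work.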

22 WOS Accessibility

NAS Gateway
- NAS protocols (CIFS, NFS, etc.) with LDAP/AD support
- Scalable to multiple gateways; HA failover & DR protected
- Synchronized database across remote sites
- Local read & write cache
- LAN or WAN access to WOS
- Federates across WOS & NAS; migration from existing NAS

Cloud Storage Platform
- Targeted at cloud service providers and private clouds
- Enables S3-enabled apps to use WOS storage at a fraction of the price
- S3-compatible & WebDAV APIs
- Supports full multi-tenancy, bill-back, and per-tenant reporting & billing
- Remote storage, file sharing, and backup agents

Native Object Store Interface
- C++, Python, Java, PHP, and HTTP REST interfaces
- PUT, GET, DELETE object, RESERVE ObjectID, etc.
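Because the cloud storage platform exposes an S3-compatible API, an S3-enabled application typically only needs to point its client at the WOS endpoint instead of AWS. A small sketch using boto3; the endpoint URL, credentials and bucket name are hypothetical placeholders.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://wos-cloud.example.com",   # S3-compatible WOS cloud platform, not AWS
    aws_access_key_id="TENANT_KEY",
    aws_secret_access_key="TENANT_SECRET",
)

# Same calls the application would make against S3 itself.
s3.put_object(Bucket="media-archive", Key="trailer.mp4", Body=b"...video bytes...")
clip = s3.get_object(Bucket="media-archive", Key="trailer.mp4")["Body"].read()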

23 Cloud & Service Provider Tools
Private or internal cloud
- Customer: medium to large multi-site enterprise
- Provides services to internal business units
- Lowers costs by optimizing utilization of CPU & storage
- Transfers EC2 & S3 workloads in-house to improve security & lower costs
- Bills back internal departments for services

Hosted / managed service provider
- Provides hosting services for a few large customers
- Hosts at a local site or a third-party data center
- May share some resources across multiple customers
- Extremely security conscious

Public cloud
- Shares resources across many customers
- Hosts at third-party data centers
- Subscription pricing for CPU, storage, & network usage
- Offers the lowest CAPEX through subscription pricing

Opportunities & case studies: Enterprise IT wants to move AWS workloads in-house because of security, IP and record-control concerns, and to reduce costs; examples of these opportunities are financial, pharma, life-sciences, and state/federal government customers, for whom remote-site support and data locality are important. Hosted-cloud / MSP opportunities are large enterprises that want to farm out IT operations to reduce costs while still maintaining security and location control; hosting providers and MSPs optimize resource utilization by taking over complete IT operations for a relatively small number of companies and using virtual processing and virtual storage / thin provisioning to reduce costs. Examples of hosted environments include regional, citywide/municipal and airport surveillance, as well as commercial central-site and branch-office surveillance, and home surveillance. Public cloud customers are fairly well known and include operations for SMEs and smaller commercial entities, as well as LEs for low-security processing.

Service providers' common needs: DR, multi-tenancy, data locality, standard interfaces, low TCO.

24 WOS Multi-Protocol Gateway
NFS/CIFS gateway for in-house IT & private clouds:
- Optimizes both multi-site collaboration & data locality
- HA failover and DR protected
- Provides NAS access for POSIX file-system applications
- Standard NFS/CIFS protocol access with LDAP integration
- NAS data migration capabilities

(Diagram: multi-protocol gateways (NFS, CIFS, FTP) with metadata stores in front of WOS at Data Centers 1-3, each providing processing locality, with replication & multi-site collaboration between sites.)

Benefits:
- Reduces costs - a single solution for end-user incremental storage, file sharing and backup using existing IT infrastructure
- Addresses security and compliance needs - offers a fully encrypted private cloud with access controls
- Improves end-user satisfaction - quickly deploy and provision unlimited amounts of storage
- Improves efficiency - enables advanced services such as secure sharing and collaboration
- Ubiquitous access - anytime, anywhere, from desktop, web or smartphone
- Automates backup and archive processes - use Mezeo auto-synchronization and geo-location/geo-replication, along with Mezeo Ready partner solutions, to enable backup, archive and disaster recovery

25 Failure recovery - Data, Disk or Net
Normal GET operation:
1. WOS-Lib selects the replica with the lowest-latency path & sends the GET request.
2. The node in zone "San Fran" returns object A to the application.

GET operation - corruption detected, with repair:
1. WOS-Lib selects the replica with the lowest latency & sends the GET request.
2. The node in zone "San Fran" detects object corruption.
3. WOS-Lib finds the next-nearest copy & retrieves it for the client application.
4. In the background, the good copy is used to replace the corrupted object in the San Fran zone.

(Animated diagram, best viewed in presentation mode: a client app with WOS-Lib issues GET "A" against a cluster spanning the San Fran (10 ms), New York (40 ms) and London (80 ms) zones.)
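The recovery logic can be sketched as a simple latency-ordered read with checksum verification and background repair. The latency map values come from the slide; the helper functions and their signatures are hypothetical stand-ins for WOS-Lib.

LATENCY_MS = {"San Fran": 10, "New York": 40, "London": 80}   # per-client latency map

def get_with_repair(oid, read_replica, checksum_ok, repair_async):
    """read_replica(zone, oid) -> bytes; checksum_ok(data) -> bool;
    repair_async(bad_zone, good_data) re-replicates a good copy in the background."""
    corrupted = []
    for zone in sorted(LATENCY_MS, key=LATENCY_MS.get):       # nearest replica first
        data = read_replica(zone, oid)
        if checksum_ok(data):
            for bad_zone in corrupted:                        # repair any bad copies found on the way
                repair_async(bad_zone, data)
            return data
        corrupted.append(zone)                                # corruption detected, try next-nearest copy
    raise IOError(f"no intact replica found for object {oid}")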

26 Geographic Replica Distribution
PUT with asynchronous replication:
1. WOS-Lib selects the "shortest-path" (lowest-latency) node.
2. The node in zone "San Fran" stores 2 copies of the object on different disks (nodes).
3. The San Fran node returns the OID to the application.
4. Later (as soon as possible), the cluster asynchronously replicates the object to the New York & London zones.
5. Once ACKs are received from the New York & London zones, the extra copy in the San Fran zone is removed.

(Animated diagram, best viewed in presentation mode: client app and WOS-Lib with the latency map, writing object A into the San Fran zone and replicating to the New York and London zones.)
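The same flow in miniature: the application gets its OID back as soon as two local copies exist, while replication to the remote zones and removal of the surplus local copy happen in the background. All helper names and signatures here are hypothetical, chosen just to show the ordering.

import threading

def put_async(data, local_zone, remote_zones, store_copy, drop_copy, new_oid):
    """store_copy(zone, oid, data) writes one replica; drop_copy(zone, oid) removes one
    surplus local copy; new_oid() reserves an Object ID."""
    oid = new_oid()
    store_copy(local_zone, oid, data)      # copy 1 on one local node/disk
    store_copy(local_zone, oid, data)      # copy 2 on a different local node/disk

    def replicate():
        for zone in remote_zones:          # e.g. ["New York", "London"]
            store_copy(zone, oid, data)    # blocks until each remote zone ACKs
        drop_copy(local_zone, oid)         # remote copies confirmed: shed the extra local copy

    threading.Thread(target=replicate, daemon=True).start()
    return oid                             # returned to the application immediately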

27 Multi-site Post-Production Operation Data Locality & Collaboration
1. An LA-site user edits video "A", which replicates to Mexico City & Madrid based on policy.
2. The multi-protocol gateway immediately synchronizes the metadata DB with the Madrid user.
3. The Madrid user requests video "A" for processing; WOS-Lib selects the Madrid site (lowest latency) & retrieves it for the user.
4. The Madrid user extracts frames from the video & writes them to WOS as a new object, which replicates to Mexico City & LA.

The example shows how WOS allows collaboration and synchronization in real time. Editing stations in Los Angeles and Madrid are making last-minute changes to a movie trailer set for a grand-opening premiere tonight in Mexico City. The LA team makes its final changes, which are committed to the cloud and immediately replicated to all zones; Madrid finishes adding the language subtitles and performs QC checks. The final product is committed back to the cloud, where the Mexico City team accesses the latest real-time update for tonight's premiere. There is no waiting for facilities to download via FTP, and no plane flights with new reels or disks of content - data is delivered as needed, allowing last-minute changes while keeping the premiere on time, on schedule, and on budget.

(Animated diagram, best viewed in presentation mode: real-time editing apps and NAS gateways in the Los Angeles, Mexico City and Madrid zones of cluster "Acme WOS 1", with the Los Angeles and Madrid latency maps.)

28 iRODS Integration
iRODS, a rules-oriented distributed data management application, meets WOS, an object-oriented content scale-out and global distribution system. WOS brings:
- Petabyte scalability: scale out by simply adding storage modules
- Unrivaled simplicity: management simplicity translates directly to lower TCO
- Self-healing: zero intervention required for failures; automatically recovers from lost drives
- Rapid rebuilds: fully recover from lost drives in moments
- Replication ready: ingest & distribute data globally
- Disaster recoverable: for uninterrupted transactions no matter what type of disaster occurs
- File layout: capacity- and performance-optimized
- Object metadata: user-defined metadata makes files smarter
A rules-oriented application meets object-oriented storage.

29 WOS + iRODS is the simple solution for Cloud Collaboration
WOS is a flat, addressable, low-latency data structure, not an extents-based file system with layers of v-nodes and i-nodes, and it creates a "trusted" environment with automated replication. iRODS is the ideal complement to WOS, allowing access from multiple clients and adding an efficient database for metadata search activities.
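As a rough illustration of the combination from the client side, the python-irodsclient package can ingest data and tag it with searchable metadata, while the iRODS resource on the server side is assumed to be backed by WOS. The host, credentials, zone, resource configuration and paths below are placeholders.

from irods.session import iRODSSession

with iRODSSession(host="irods.example.org", port=1247, user="alice",
                  password="secret", zone="tempZone") as session:
    # Ingest: iRODS stores the data on its (assumed WOS-backed) resource and
    # records the object in its catalog database.
    session.data_objects.put("frame_0001.dpx", "/tempZone/home/alice/frame_0001.dpx")

    # Tag the object with metadata that the iRODS catalog can later be searched on.
    obj = session.data_objects.get("/tempZone/home/alice/frame_0001.dpx")
    obj.metadata.add("project", "trailer-recut")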

30 Some iRODS Examples
- NASA Jet Propulsion Laboratory: selected for managing distribution of planetary data
- MODUS (NASA Center for Climate Simulation): federated satellite image and reference data for climate simulation
- U.S. Library of Congress: manages the entire digital collection
- U.S. National Archives: manages ingest and distribution
- French National Library: iRODS rules control ingestion, access, and audit functions
- Australian Research Coordination Service: manages data between academic institutions

31 Surveillance to the Cloud Case Study Eliminating Cost
Before: classic CCTV with NVRs
- Site-surveillance CCTV
- Multiple NVRs
- Multiple islands of iSCSI storage
- Clip-reviewing software costs
- Admin & support costs

After: distributed multi-site cloud surveillance with WOS
- IP cameras
- Centralized video review
- Centralized storage
- Eliminates admin costs

In a 6,400-camera surveillance deployment, a WOS-based IP camera solution was 33% cheaper to deploy and reduced TCO by 30-50%.

32 WOS + iRODS: YottaBrain Program
Each container: 5 PB of WOS. WOS clusters are federated with iRODS.

33 WOS Cloud Storage Advantages
The World's Leading Object Storage Appliance
- Single global namespace for billions of files
- Fast, efficient content delivery: automated, policy-based multi-site replication to the network edge
- World-leading file write and read performance
- Ability to grow non-disruptively, in small increments, to massive scale with leading energy and space efficiency
- Single management interface for a global WOS cloud
- Distributed, self-healing content infrastructure without bottlenecks or single points of failure

34 Thank You




