Microsoft Software Defined Storage (SDS) - breadth offering and unique opportunity, customer choice

5 Microsoft Software Defined Storage (SDS): breadth offering and unique opportunity, customer choice
  Private Cloud with Partner Storage: SAN and NAS storage
  Private Cloud with Microsoft SDS: Windows SMB3 Scale-Out File Server (SoFS) + Storage Spaces
  Hybrid Cloud Storage: StorSimple + Microsoft Azure Storage
  Public Cloud Storage: Microsoft Azure Storage
  One consistent platform spanning customer, service provider, and Microsoft clouds

6 [Diagram] A Hyper-V cluster accessing, over SMB, a Scale-Out File Server cluster built on Storage Spaces and shared JBOD storage

7 What is it?
  New file storage solution for server applications
  Scenarios include Hyper-V virtual machines and SQL Server databases
Highlights
  Enterprise-grade storage: scalable, reliable, continuously available
  Easy provisioning and management, using familiar Microsoft tools
  Leverages the latest networking technologies (converged Ethernet, RDMA)
  Increased flexibility, including live migration and multi-cluster deployments
  Reduces capital and operational expenses
Supporting Features
  SMB Transparent Failover - continuous availability if a node fails
  SMB Scale-Out - active/active file server clusters, automatically balanced
  SMB Direct (SMB over RDMA) - low latency, high throughput, low CPU use
  SMB Multichannel - increased network throughput and fault tolerance
  SMB Encryption - secure data transmission without costly PKI infrastructure
  VSS for SMB File Shares - backup and restore using existing VSS framework
  SMB PowerShell, VMM Support - manageability and System Center support
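
As a rough illustration of the SMB PowerShell manageability called out above, a continuously available share for Hyper-V might be provisioned like this (the share name, path, and account are placeholders, not values from the deck):

# Create a share on the Scale-Out File Server and mark it continuously available,
# so Hyper-V handles keep working through a node failover (SMB Transparent Failover)
New-SmbShare -Name "VMStore1" -Path "C:\ClusterStorage\Volume1\Shares\VMStore1" -FullAccess "CONTOSO\HyperVHosts$" -ContinuouslyAvailable $true

# Confirm the share picked up the continuous availability setting
Get-SmbShare -Name "VMStore1" | Format-List Name, Path, ContinuouslyAvailable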

9 SMB Multichannel
  Full throughput
    Bandwidth aggregation with multiple NICs
    Multiple CPU cores engaged when using Receive Side Scaling (RSS)
  Automatic failover
    SMB Multichannel implements end-to-end failure detection
    Leverages NIC teaming if present, but does not require it
  Automatic configuration
    SMB detects and uses multiple network paths
  [Diagram] Sample configurations: a single RSS-capable 10GbE NIC, multiple 1GbE or 10GbE NICs, a team of NICs, and multiple 10GbE or InfiniBand RDMA NICs
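
A quick way to see what Multichannel has detected and negotiated, assuming an SMB connection to the file server is already open (standard SMB cmdlets; output not shown in the deck):

# Interfaces the client and the server advertise to Multichannel (speed, RSS and RDMA capability)
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface    # run on the file server

# Channels actually opened for the current SMB sessions
Get-SmbMultichannelConnection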

10 [Diagram] SMB client and server, each with a single 10GbE NIC (RSS), connected through a 10GbE switch; a per-core CPU utilization chart (cores 1-4) is shown

11 [Diagram] The same single 10GbE NIC (RSS) configuration shown twice, side by side, each with its own per-core CPU utilization chart (cores 1-4) for comparison

12 1 session, without Multichannel
  No automatic failover
  Can't use full bandwidth
  Only one NIC engaged
  Only one CPU core engaged
  [Diagram] SMB client/server pairs: one with a single RSS-capable 10GbE NIC, one with dual 10GbE NICs and dual switches, where only one path carries traffic

13 1 session, with Multichannel vs. 1 session, without Multichannel
  Without Multichannel: no automatic failover; can't use full bandwidth; only one NIC engaged; only one CPU core engaged
  [Diagram] The same configurations (single RSS-capable 10GbE NIC; dual 10GbE NICs with dual switches), now with Multichannel engaging the available NICs and cores

14 Windows Server 2012 Developer Preview results using four 10GbE NICs
  Linear bandwidth scaling: 1 NIC - 1150 MB/sec; 2 NICs - 2330 MB/sec; 3 NICs - 3320 MB/sec; 4 NICs - 4300 MB/sec
  Leverages NIC support for RSS (Receive Side Scaling)
  Bandwidth for small IOs is bottlenecked on CPU

15 1 session, with NIC Teaming, no Multichannel
  Automatic NIC failover
  Can't use full bandwidth
  Only one NIC engaged
  Only one CPU core engaged
  [Diagram] SMB client/server pairs with teamed dual 1GbE NICs and teamed dual 10GbE NICs, each team spanning two switches

16 1 session, with NIC Teaming and Multichannel vs. with NIC Teaming, no Multichannel
  With teaming alone: automatic NIC failover, but can't use full bandwidth; only one NIC engaged; only one CPU core engaged
  [Diagram] The same teamed 1GbE and 10GbE configurations, now with Multichannel driving all team members
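
For reference, a minimal sketch of building the kind of switch-independent team these diagrams assume (team and member names are placeholders); Multichannel then runs on top of the team interface:

# Team two 10GbE ports; SMB Multichannel sees the resulting team interface and can still open multiple channels over it
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC 1", "NIC 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Verify the team and its members came up
Get-NetLbfoTeam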

17 1 session, without Multichannel
  No automatic failover
  Can't use full bandwidth
  Only one NIC engaged
  RDMA capability not used
  [Diagram] SMB client/server pairs with dual 10GbE RDMA NICs (dual switches) and with dual 54Gb InfiniBand RDMA NICs (dual switches)

18 1 session, with Multichannel vs. 1 session, without Multichannel
  Without Multichannel: no automatic failover; can't use full bandwidth; only one NIC engaged; RDMA capability not used
  [Diagram] The same dual 10GbE RDMA and dual 54Gb InfiniBand configurations, now with Multichannel engaging both RDMA NICs

20 [Diagram] SMB Direct: the SMB client and SMB server move data directly between memory buffers through NDKPI and RDMA NICs, over Ethernet or InfiniBand

21 [Diagram] End-to-end data path: application (user mode) -> SMB client (kernel) on the file client -> RDMA-capable NIC -> network with RDMA support -> RDMA-capable NIC -> SMB server -> NTFS and SCSI stack -> disk on the file server

22 SMB Direct
  1. Application (Hyper-V, SQL Server) does not need to change.
  2. SMB client makes the decision to use SMB Direct at run time.
  3. NDKPI provides a much thinner layer than TCP/IP.
  4. Remote Direct Memory Access performed by the network interfaces.
  [Diagram] Client application -> SMB client -> either the unchanged TCP/IP path over a regular NIC, or SMB Direct over NDKPI and an RDMA NIC (Ethernet and/or InfiniBand); the API is unchanged
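
Because the SMB client makes the RDMA decision at run time (point 2 above), falling back to TCP/IP for troubleshooting is just a matter of disabling RDMA; a minimal sketch, with the interface name as a placeholder:

# Disable RDMA on one adapter so the SMB client falls back to TCP/IP on that path
Disable-NetAdapterRdma -Name "RDMA NIC 1"

# Re-enable it when done; SMB Direct is used again for new connections
Enable-NetAdapterRdma -Name "RDMA NIC 1"

# RDMA (Network Direct) can also be turned off machine-wide
Set-NetOffloadGlobalSetting -NetworkDirect Disabled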

24 Comparison of network interface options (Type / Cards* / Pros / Cons)

Non-RDMA Ethernet (wide variety of NICs)
  Pros: TCP/IP-based protocol; works with any Ethernet switch; wide variety of vendors and models; support for in-box NIC teaming
  Cons: high CPU utilization under load; high latency

iWARP (Intel NE020, Chelsio T4/T5*)
  Pros: low CPU utilization under load; low latency; TCP/IP-based protocol; works with any Ethernet switch; RDMA traffic routable; offers up to 40Gbps per NIC port today*
  Cons: requires enabling firewall rules; dual 40GbE port performance limited by PCI slot*

RoCE (Mellanox ConnectX-3, Mellanox ConnectX-3 Pro*)
  Pros: Ethernet-based protocol; works with Ethernet switches; offers up to 40Gbps per NIC port today*
  Cons: RDMA not routable via existing IP infrastructure (routable RoCE is in the works); requires DCB switch with Priority Flow Control (PFC); dual 40GbE port performance limited by PCI slot*

InfiniBand (Mellanox ConnectX-3, Mellanox ConnectX-3 Pro*, Mellanox Connect-IB)
  Pros: switches typically less expensive per port*; switches offer high-speed Ethernet uplinks; commonly used in HPC environments; offers up to 54Gbps per NIC port today*
  Cons: not an Ethernet-based protocol; RDMA traffic not routable via IP infrastructure; requires InfiniBand switches; requires a subnet manager (typically on the switch)

* Current as of May 2014. Information on this slide is subject to change as technologies evolve and new cards become available.
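
Whichever card type is installed, the same cmdlets show whether Windows sees it as RDMA-capable and whether RDMA is currently enabled on it (a small sketch, no card-specific settings assumed):

# List adapters that expose RDMA and whether RDMA is enabled on each
Get-NetAdapterRdma | Format-Table Name, Enabled -AutoSize

# General adapter inventory (driver description, link speed) for the same NICs
Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed -AutoSize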

26 10GbE available in dual-port and quad-port models
  40GbE available in single- and dual-port models (PCIe bus limits performance for 2x40GbE)
  Built on TCP/IP and IETF iWARP RDMA standards
  Routable - use the solution across the entire data center
  Compatible with existing Ethernet switch infrastructure
  SFP+ with twinax passive cables for 10GbE connections
  QSFP option for the quad-port 10GbE or 40GbE
  Short-reach and long-reach optic modules available
  Inbox drivers for SMB Direct in Windows Server 2012 R2
  Details at http://www.chelsio.com/wp-content/uploads/2011/07/ProductSelector-0312.pdf

28 Install hardware and drivers: Get-NetAdapter, Get-NetAdapterRdma
  Configure IP addresses: Get-SmbServerNetworkInterface, Get-SmbClientNetworkInterface
  Establish an SMB connection: Get-SmbConnection, Get-SmbMultichannelConnection
  Similar to configuring SMB for regular NICs
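
Put together, the three steps above might look like the following on a client and its file server (the share path and drive letter are placeholders):

# Step 1 - hardware and drivers: confirm the NICs are present and RDMA-capable
Get-NetAdapter
Get-NetAdapterRdma

# Step 2 - after assigning IP addresses, confirm SMB sees the interfaces
Get-SmbServerNetworkInterface    # on the file server
Get-SmbClientNetworkInterface    # on the client

# Step 3 - open a connection and check which channels are in use
New-SmbMapping -LocalPath "X:" -RemotePath "\\FileServer1\Share1"
Get-SmbConnection
Get-SmbMultichannelConnection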

31 # Clear previous configurations
Remove-NetQosTrafficClass -Confirm:$False
Remove-NetQosPolicy -Confirm:$False

# Enable DCB
Install-WindowsFeature Data-Center-Bridging

# Disable the DCBX willing setting
Set-NetQosDcbxSetting -Willing $False

# Create QoS policies and tag each type of traffic with the relevant priority
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
New-NetQosPolicy "DEFAULT" -Default -PriorityValue8021Action 3
New-NetQosPolicy "TCP" -IPProtocolMatchCondition TCP -PriorityValue8021Action 1
New-NetQosPolicy "UDP" -IPProtocolMatchCondition UDP -PriorityValue8021Action 1

# If VLANs are used, mark the egress traffic with the relevant VLAN ID
# (interface name and VLAN ID below are placeholders)
Set-NetAdapterAdvancedProperty -Name "<interface name>" -RegistryKeyword "VlanID" -RegistryValue "<VLAN ID>"

# Enable Priority Flow Control (PFC) on a specific priority; disable it for the others
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Enable QoS on the relevant interface
Enable-NetAdapterQos -InterfaceAlias "Ethernet 4"

# Optionally, limit the bandwidth used by the SMB traffic to 60%
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 60 -Algorithm ETS
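
After running the script, the matching Get- cmdlets can confirm that the policies, flow control, and traffic class took effect (a verification sketch using the same interface alias as above):

# Traffic classification policies (SMB on priority 3, other traffic as tagged above)
Get-NetQosPolicy

# Priority Flow Control should be enabled only for priority 3
Get-NetQosFlowControl

# ETS traffic class reserving bandwidth for SMB
Get-NetQosTrafficClass

# DCB/QoS state of the physical adapter
Get-NetAdapterQos -Name "Ethernet 4"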

33 [Diagram] Four test configurations, each running SQLIO against Fusion-IO storage: local (single server); SMB client to SMB server over SMB 3.0 + 10GbE (non-RDMA); SMB 3.0 + RDMA over InfiniBand FDR; SMB 3.0 + RDMA over InfiniBand QDR

34 *** Preliminary *** results from two Intel Romley machines with 2 sockets each, 8 cores/socket
  Both client and server using a single port of a Mellanox network interface in a PCIe Gen3 x8 slot
  Data goes all the way to persistent storage, using 4 FusionIO ioDrive 2 cards

  Workload: 512KB IOs, 8 threads, 8 outstanding
  Configuration                   BW (MB/sec)   IOPS (512KB IOs/sec)   %CPU privileged
  Non-RDMA (Ethernet, 10Gbps)           1,129            2,259              ~9.8
  RDMA (InfiniBand QDR, 32Gbps)         3,754            7,508              ~3.5
  RDMA (InfiniBand FDR, 54Gbps)         5,792           11,565              ~4.8
  Local                                 5,808           11,616              ~6.6

  Workload: 8KB IOs, 16 threads, 16 outstanding
  Configuration                   BW (MB/sec)   IOPS (8KB IOs/sec)     %CPU privileged
  Non-RDMA (Ethernet, 10Gbps)             571           73,160             ~21.0
  RDMA (InfiniBand QDR, 32Gbps)         2,620          335,446             ~85.9
  RDMA (InfiniBand FDR, 54Gbps)         2,683          343,388             ~84.7
  Local                                 4,103          525,225             ~90.4

  http://smb3.info
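
For context, these are the kinds of workloads SQLIO generates; one plausible invocation for the 512KB / 8-thread / 8-outstanding case might look like this (the test file path is a placeholder, and the exact flags used for the published runs are not given in the deck):

# 512KB random reads, 8 threads, 8 outstanding IOs per thread, unbuffered, 60 seconds, latency stats
.\sqlio.exe -kR -frandom -b512 -t8 -o8 -BN -LS -s60 "\\FileServer1\Share1\testfile.dat"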

35 [Diagram] Three configurations compared: SQLIO running locally on a single server; a file client (SMB 3.0) running SQLIO against a file server (SMB 3.0) through RDMA NICs; and a VM on a Hyper-V host (SMB 3.0) running SQLIO against the same file server through RDMA NICs. The file server is backed by multiple SAS RAID controllers, each attached to a JBOD of SSDs. http://smb3.info

36 Configuration    BW (MB/sec)   IOPS (512KB IOs/sec)   %CPU privileged   Latency
  1 - Local            10,090          38,492                ~2.5%           ~3 ms
  2 - Remote            9,852          37,584                ~5.1%           ~3 ms
  3 - Remote VM        10,367          39,548                ~4.6%           ~3 ms

37 [Diagram] File client (SMB 3.0) running SQLIO against a file server (SMB 3.0) through an RDMA NIC; the file server runs Storage Spaces over multiple SAS HBAs, each attached to a JBOD of SSDs
  Workload                        BW (MB/sec)   IOPS (IOs/sec)   %CPU privileged   Latency
  512KB IOs, 100% read, 2t, 8o        16,778         32,002           ~11%           ~2 ms
  8KB IOs, 100% read, 16t, 2o          4,027        491,665           ~65%           <1 ms

38 [Diagram] File client (SMB 3.0) running SQLIO against a file server (SMB 3.0) through an RDMA NIC; Storage Spaces over six SAS HBAs, each attached to a JBOD of SSDs
  8KB random reads from a mirrored space (disk): ~600,000 IOPS
  8KB random reads from cache (RAM): ~1,000,000 IOPS
  32KB random reads from a mirrored space (disk): ~500,000 IOPS (~16.5 GBytes/sec)
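
A minimal sketch of how a mirrored space over pooled JBOD SSDs can be created (pool and disk names are placeholders; the column count and subsystem name are assumptions to be tuned to the actual JBOD layout):

# Pool the JBOD SSDs that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SSDPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Carve a mirrored space out of the pool (4 columns assumed here)
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "Mirror1" -ResiliencySettingName Mirror -NumberOfColumns 4 -UseMaximumSize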

42 [Diagram] End-to-end architecture: clients (NICs) reach the environment through a router and switches (Switch1, Switch2); a Hyper-V cluster of hosts, each with teamed NICs feeding a virtual switch and vNIC, VMs with virtual NICs and virtual disks, an SMB 3.0 client, and an R-NIC; a file server cluster (File Server 1 and File Server 2) with R-NICs, SMB 3.0 server, Storage Spaces, and SAS HBAs exposing file shares on spaces; three SAS JBODs, each with dual SAS modules and disks; dedicated switches (Switch3-Switch6) for storage traffic; and a management network with DHCP, DC/DNS, and a management file server

43 [Diagram] The same end-to-end architecture, annotated with the sizing questions at each layer:
  Number of clients; speed of client NICs
  Number of Hyper-V hosts; cores and RAM per Hyper-V host
  VMs per host; virtual processors per VM; RAM per VM
  NICs per Hyper-V host and their speed; R-NICs per Hyper-V host and their speed
  R-NICs per file server and their speed
  SAS HBAs per file server; SAS speed; SAS ports per module
  Disks per JBOD; disk type and speed
  Number of spaces; columns per space; CSV cache configuration; tiering configuration
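
Standing up the file-server side of this diagram boils down to a failover cluster plus the Scale-Out File Server role; a minimal sketch with placeholder node names and address:

# Validate and form the two-node file server cluster
Test-Cluster -Node "FS1", "FS2"
New-Cluster -Name "FSCluster" -Node "FS1", "FS2" -StaticAddress 192.168.100.10

# Add the Scale-Out File Server role
# (pooled spaces and Cluster Shared Volumes are assumed to exist already; see the Storage Spaces sketch above)
Add-ClusterScaleOutFileServerRole -Name "SOFS1"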

44 [Diagram] Non-RDMA network design: the file server teams NIC1F and NIC2F under TCP/IP, feeding the SMB server and other components; the Hyper-V host teams NIC1H and NIC2H under TCP/IP, feeding the SMB client, other components, and a virtual switch with a host vNIC and the VM's network adapter (VMNIC); the two hosts connect through Switch 1 and Switch 2

45 [Diagram] RDMA network design: the same teamed NIC1/NIC2 pair on each host carries TCP/IP and virtual switch traffic through Switch 1 and Switch 2, while two RDMA NICs per host (NIC3 and NIC4) carry SMB Direct traffic through Switch 3 and Switch 4
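
Note that the RDMA NICs (NIC3/NIC4) sit outside the team in this design, since placing an adapter in an LBFO team disables its RDMA capability. A quick check that the design is working as intended, using the cmdlets from the earlier slides:

# The dedicated RDMA NICs should still report RDMA as enabled
Get-NetAdapterRdma | Format-Table Name, Enabled -AutoSize

# SMB storage traffic should show channels over the RDMA NICs (SMB Direct)
Get-SmbMultichannelConnection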

60 Related sessions (Type, Title, Date/Time)
  Foundation: FDN06 Transform the Datacenter: Making the Promise of Connected Clouds a Reality (Mon 11:00 AM)
  Breakout: DCIM-B349 Software-Defined Storage in Windows Server 2012 R2 and System Center 2012 R2 (Wed 8:30 AM)
  Breakout: DCIM-B354 Failover Clustering: What's New in Windows Server 2012 R2 (Tue 1:30 PM)
  Breakout: DCIM-B337 File Server Networking for a Private Cloud Storage Infrastructure in WS 2012 R2 (Tue 3:15 PM)
  Breakout: DCIM-B333 Distributed File System Replication (DFSR) Scalability in Windows Server 2012 R2 (Wed 5:00 PM)
  Breakout: DCIM-B335 Microsoft Storage Solutions in Production Environments (Tue 1:30 PM)
  Breakout: DCIM-B364 Step-by-step to Deploying Microsoft SQL Server 2014 with Cluster Shared Volumes (Thu 8:30 AM)
  Breakout: DCIM-B310 The StorSimple Approach to Solving Issues Related to Growing Data Trends (Mon 3:00 PM)
  Breakout: DCIM-B357 StorSimple: Enabling Microsoft Azure Cloud Storage for Enterprise Workloads (Wed 1:30 PM)
  ILL: DCIM-IL200 Build Your Storage Infrastructure with Windows Server 2012 R2 (Wed 8:30 AM)
  ILL: DCIM-IL200-R Build Your Storage Infrastructure with Windows Server 2012 R2 (Wed 5:00 PM)
  ILL: DCIM-IL308 Windows Server 2012 R2: Introduction to Failover Clustering with Hyper-V (Mon 3:00 PM)
  ILL: DCIM-IL308-R Windows Server 2012 R2: Intro to Failover Clustering with Hyper-V (Tue 1:30 PM)
  HOL: DCIM-H205 Build Your Storage Infrastructure with Windows Server 2012 R2
  HOL: DCIM-H321 Windows Server 2012 R2: Introduction to Failover Clustering with Hyper-V

62 Come Visit Us in the Microsoft Solutions Experience!
  Look for Datacenter and Infrastructure Management, TechExpo Level 1 Hall CD
For More Information
  Windows Server 2012 R2: http://technet.microsoft.com/en-US/evalcenter/dn205286
  Microsoft Azure: http://azure.microsoft.com/en-us/
  System Center 2012 R2: http://technet.microsoft.com/en-US/evalcenter/dn205295
  Azure Pack: http://www.microsoft.com/en-us/server-cloud/products/windows-azure-pack

63 www.microsoft.com/learning http://microsoft.com/msdn http://microsoft.com/technet http://channel9.msdn.com/Events/TechEd
