
1 XenServer Storage Integration Deep Dive

2 © 2009 Citrix Systems, Inc. — All rights reserved
Agenda
- XenServer 5.5 Storage Architecture
- Multipathing
- Vendor Integration
- StorageLink

3 Citrix XenServer & Essentials 5.5 Family
Editions: Free Edition, XenServer Essentials Enterprise Edition, Platinum Edition
Features across the editions: Shared Storage (iSCSI, FC, NFS); XenCenter Management; 64-bit, Windows and Linux workloads; Live Migration (XenMotion); no socket restriction; High Availability; StorageLink(TM); Provisioning Services (physical + virtual); Lab Management; Performance Monitoring; Active Directory Integration; Generic Storage Snapshotting; Provisioning Services (virtual); Stage Management; Workload Balancing; Workflow Studio Orchestration (NEW)

4 XenServer 5.5 Storage Architecture

5 Expanded Backup Support

6 Storage Technologies
- XenServer 5.0 / 5.5, NFS / EXT3: Storage Repository = filesystem holding .VHD files
- XenServer 5.0, iSCSI / FC: Storage Repository = LUN carrying an LVM volume group; each VM virtual disk is an LVM logical volume
- XenServer 5.5, iSCSI / FC: as above, but each LVM logical volume additionally carries a VHD header

7 LVHD in XenServer 5.5
- Replaces LVM for SRs; hosts VHD files directly on LVM volumes
- Best of both worlds: the features of VHD with the performance of LVM
- Adds advanced storage features: fast cloning, snapshots
- Fast and simple upgrade; backwards compatible

8 LVM
# pvs
  PV          VG                                                  PSize    PFree
  /dev/dm-0   VG_XenStorage-7be61354-3bf1-ed0b-fd09-8a003390b632   79.99G    2.36G
  /dev/dm-1   VG_XenStorage-5e5e9374-edf2-6810-a706-8c94c78589e7   99.99G  600.00M
  /dev/dm-2   VG_XenStorage-2dfd2b40-906b-f7b7-863d-e7f5db8454b0  436.00M  176.00M
  /dev/sda3   VG_XenStorage-e3ce61e7-994d-a942-bcd5-44eeea648caf   66.85G    7.60G
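The pvs listing above can be summarised per physical volume with a little awk. The snippet below is a sketch that parses an abbreviated copy of that output; on a real host you would pipe `pvs --noheadings` instead.

```shell
# Sketch: summarising free space per physical volume from pvs output.
# The heredoc reproduces an abbreviated copy of the listing above.
pvs_out=$(cat <<'EOF'
/dev/dm-0  VG_XenStorage-7be61354  79.99G  2.36G
/dev/sda3  VG_XenStorage-e3ce61e7  66.85G  7.60G
EOF
)
# Columns: PV, VG, PSize, PFree
printf '%s\n' "$pvs_out" | awk '{printf "%s: %s free of %s\n", $2, $4, $3}'
```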

9 VHD
- See and verify the mount point of a VHD SR: /var/run/sr-mount/
- "Full provision" a VHD SR with vhd-util
- See http://support.citrix.com/article/CTX118842 (NetApp best practice)
- Check the VHD structure: hexdump -vC <file>.vhd | less
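As a quick sanity check, a dynamic VHD stores a copy of its footer at offset 0, beginning with the 8-byte cookie "conectix", which is what the hexdump inspection reveals. The sketch below fabricates a minimal dummy file just to demonstrate the check; /tmp/demo.vhd is a hypothetical sample, not a real virtual disk.

```shell
# Sketch: a dynamic VHD begins with the 8-byte cookie "conectix"
# (the footer copy at offset 0). We fabricate a dummy file
# (hypothetical /tmp/demo.vhd) just to demonstrate the check;
# on a real SR, point hexdump at a file under /var/run/sr-mount/.
printf 'conectix' > /tmp/demo.vhd
dd if=/dev/zero bs=1 count=504 >> /tmp/demo.vhd 2>/dev/null

head -c 8 /tmp/demo.vhd            # the cookie: conectix
hexdump -vC /tmp/demo.vhd | head -n 1
```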

10 Multipathing

11 Why use Multipathing?
- Path redundancy to the storage
- Performance increase through load-sharing algorithms
- Many Fibre Channel environments have multiple paths by default
[Diagram: XenServer with FC HBA 1 and FC HBA 2, connected through redundant FC switches to a storage subsystem whose two storage controllers both present LUN 1]

12 Enabling Multipathing
xe host-param-set other-config:multipathing=true uuid=host_uuid
xe host-param-set other-config:multipathhandle=dmp uuid=host_uuid
Note: Do not enable multipathing by any other means (e.g. the generic Linux CLI tools)!

13 XenServer supports two multipathing technologies

                          Device Mapper Multipathing (DMP)   RDAC MPP (mppVhba)
Default                   yes                                no
XenServer version         >= 5.0                             4.1; CLI from >= 5.0 Update 2
Management by XenCenter   yes                                no
Support                   wide storage range                 only LSI-controller-based storage
Driver / daemon           multipathd                         mppVhba driver
CLI path check            multipath -ll                      mpputil
Configuration             /etc/multipath-enabled.conf        /etc/mpp.conf (requires running /opt/xensource/bin/update-initrd)

See details: http://support.citrix.com/article/ctx118791

14 DMP vs RDAC MPP
- Check if RDAC MPP is running: lsmod | grep mppVhba
- "multipath -ll" would show an MD device as output (if DMP is active)
- Use only one technology: when RDAC MPP is running, use it; otherwise use DMP
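The decision rule above can be sketched as a small shell check. The lsmod output here is simulated so the logic is self-contained; on a XenServer host you would grep the real `lsmod` output.

```shell
# Sketch: choosing the multipathing stack from lsmod output.
# lsmod_out is simulated here; on a host use: lsmod | grep mppVhba
lsmod_out="mppVhba  123456  0"

if printf '%s\n' "$lsmod_out" | grep -q '^mppVhba'; then
    tech="RDAC MPP"   # check paths with mpputil
else
    tech="DMP"        # check paths with multipath -ll
fi
echo "$tech"
```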

15 MPP RDAC: Path check
# mpputil
Lun #0 - WWN: 600a0b80001fdf0800001d9c49b0caa1
----------------
LunObject: present   DevState: OPTIMAL
Controller 'A' Path
--------------------
Path #1: LunPathDevice: present   DevState: OPTIMAL
Path #2: LunPathDevice: present   DevState: OPTIMAL
Controller 'B' Path
--------------------
Path #1: LunPathDevice: present   DevState: OPTIMAL
Path #2: LunPathDevice: present   DevState: OPTIMAL
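A path-health summary can be derived from mpputil-style output with grep. The heredoc below mimics (trimmed) the listing above, so the sketch runs without an LSI array attached.

```shell
# Sketch: counting healthy paths in mpputil-style output.
# The heredoc mimics (trimmed) the listing above.
mpp_out=$(cat <<'EOF'
Path #1: LunPathDevice: present DevState: OPTIMAL
Path #2: LunPathDevice: present DevState: OPTIMAL
Path #1: LunPathDevice: present DevState: OPTIMAL
Path #2: LunPathDevice: present DevState: OPTIMAL
EOF
)
total=$(printf '%s\n' "$mpp_out" | grep -c 'Path #')
healthy=$(printf '%s\n' "$mpp_out" | grep -c 'DevState: OPTIMAL')
echo "$healthy/$total paths optimal"
```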

16 DMP: Path check
- Monitoring using XenCenter
- Monitoring using the CLI: multipath -ll

17 Multipathing & Software iSCSI

18 iSCSI with Software Initiator
- IP addressing is handled by the XenServer Dom-0
- Multipathing is likewise handled by the XenServer Dom-0
- The Dom-0 IP configuration is therefore essential
[Diagram: XenServer Dom-0 with NIC 1, connected over the storage LAN switches to a storage subsystem (controller 1) presenting LUN 1]

19 Best-practice configuration: iSCSI storage with multipathing
- Separate the subnets at the IP level as well
[Diagram: NIC 1 (192.168.1.10) on subnet 1 and NIC 2 (192.168.2.10) on subnet 2 (netmask 255.255.255.0), connected through separate switches to storage LAN adapter 1 (port 1: 192.168.1.201, port 2: 192.168.2.201) and storage LAN adapter 2 (port 1: 192.168.1.202, port 2: 192.168.2.202), both presenting LUN 1]

20 Not recommended configurations for multipathing and iSCSI:
- Both server NICs in the same subnet (e.g. NIC 1: 192.168.1.10 and NIC 2: 192.168.1.11, both in subnet 1)
- Mixing NIC teaming and multipathing (e.g. one team IP of 192.168.1.10 spanning both NICs)
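The subnet rule can be checked mechanically. The sketch below assumes /24 networks (as in the example addresses above) and simply compares the network portions of the two NIC addresses; `net` is a hypothetical helper, not a XenServer command.

```shell
# Sketch: checking that two iSCSI NICs sit in different /24 subnets,
# as the best-practice layout requires. `net` is a hypothetical helper
# that derives the /24 network address; IPs are the example addresses.
net() { echo "${1%.*}.0"; }

nic1="192.168.1.10"; nic2="192.168.2.10"
if [ "$(net "$nic1")" != "$(net "$nic2")" ]; then
    echo "OK: NICs are in separate subnets"
else
    echo "WARNING: both NICs share one subnet"
fi
```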

21 Multipathing with Software Initiator
- XenServer 5 supports multipathing with the iSCSI software initiator
- Prerequisites: the iSCSI target uses the same IQN on all ports, and the target ports operate in portal mode
- Multipathing reliability has been massively enhanced in XenServer 5.5

22 How to check whether an iSCSI target operates in portal mode
Execute: iscsiadm -m discovery --type sendtargets --portal <target IP>
The output must show all IPs of the target ports with an identical IQN. Example:
192.168.0.161:3260,1 iqn.strawberry:litchie
192.168.0.204:3260,2 iqn.strawberry:litchie
When connecting to an iSCSI target using the XenCenter Storage Repository wizard, all target IPs should likewise show up after discovery.
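The portal-mode criterion, that every portal reports the identical IQN, can be verified from the discovery output. The heredoc below holds the example output from the slide so the sketch is self-contained.

```shell
# Sketch: verifying portal mode from sendtargets discovery output.
# The heredoc holds the example output shown above.
discovery=$(cat <<'EOF'
192.168.0.161:3260,1 iqn.strawberry:litchie
192.168.0.204:3260,2 iqn.strawberry:litchie
EOF
)
# Count distinct IQNs (second column); portal mode means exactly one.
iqns=$(printf '%s\n' "$discovery" | awk '{print $2}' | sort -u | wc -l | tr -d ' ')
if [ "$iqns" -eq 1 ]; then
    echo "portal mode: all portals share one IQN"
else
    echo "no portal mode: $iqns distinct IQNs"
fi
```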

23 NetApp Integration

24 NetApp Storage
- NetApp storage supports multipathing
- For configuring NetApp storage and modifying multipath.conf, see the whitepaper http://support.citrix.com/article/CTX118842
- NetApp typically supports portal mode for iSCSI, so multipathing with the iSCSI software initiator is supported
- Special considerations apply to low-end NetApp storage (e.g. FAS2020) with a limited number of LAN adapters

25 NetApp low-end storage (iSCSI)
- Often limited by the NIC configuration. Example: 2 NICs per head
- One aggregate / LUN is served by one head at a time (the other head provides fault tolerance)
- Thus effectively 2 NICs can be used for the storage connection
- Typically the filer also delivers non-block-based protocols (e.g. CIFS), which require redundancy just as the block-based protocols (e.g. iSCSI) do

26 Example FAS2020, scenario 1: no network redundancy for iSCSI and CIFS; separation of networks
[Diagram: Controller 0 (active) with NIC 0 on the CIFS network and NIC 1 on the iSCSI network; Controller 1 (fault tolerance) cabled identically]

27 Example FAS2020, scenario 2: network redundancy for iSCSI and CIFS; no separation of networks
[Diagram: Controller 0 (active) with NIC 0 and NIC 1 in a vif / bond on a combined CIFS & iSCSI network; Controller 1 (fault tolerance) with the same vif / bond; the server side uses a NIC bond]

28 Example FAS2020, scenario 3: network redundancy for iSCSI (multipathing) and CIFS; separation of networks
[Diagram: the active controller with NIC 0 and NIC 1 in a vif / bond carrying a CIFS VLAN and an iSCSI VLAN; the fault-tolerance controller configured the same; the server side uses a NIC bond for the CIFS VLAN and NIC 2 / NIC 3 with multipathing for the iSCSI VLAN]

29 Dell / EqualLogic Integration

30 Dell EqualLogic Support
- XenServer 5.5 includes an EqualLogic adapter (minimum firmware 4.0.1 required)
- Redundant path configuration does not depend on whether the adapter is used
- All PS series arrays are supported, as they run the same OS
- StorageLink Gateway support is planned

31 Dell / EqualLogic
- See the whitepaper for Dell / EqualLogic storage: http://support.citrix.com/article/CTX118841
- Each EqualLogic array has two controllers; only one controller is active
- Uses a "Group ID" address on the storage side (similar to bonding / teaming on the server side)
- Connections go only through the group address; no direct connection to the iSCSI ports is possible
- Therefore multipathing cannot be used; use bonding on the XenServer side instead

32 DataCore Integration

33 Multipathing architecture with DataCore
- Different IQNs for the targets: no portal mode possible!
[Diagram: NIC 1 (192.168.1.10) on subnet 1 and NIC 2 (192.168.2.10) on subnet 2 (netmask 255.255.255.0), connected through switches to storage controller 1 (port 1: 192.168.1.201, IQN 1) and storage controller 2 (port 2: 192.168.2.202, IQN 2), both presenting LUN 1]

34 DataCore hints
- Special attention is needed for software iSCSI; follow DataCore technical bulletin TB15: ftp://support.datacore.com/psp/tech_bulletins/TechBulletinsAll/TB15b_Citrix%20XenServer_config_501.pdf
- DataCore in a VM: OK when not using HA; the configuration is possible, but take care when booting the whole environment
- Take care when updating XenServer

35 StorageLink

36 StorageLink is the logical advancement of the XenServer-integrated storage adapters (the NetApp & EqualLogic storage adapters)

37 Citrix StorageLink Overview (XenServer)
[Diagram: a guest on XenServer with the StorageLink snap-in; the control path runs through StorageLink, while the data path goes directly to the storage over iSCSI / FC]

38 Leveraging the best of virtualization and storage
- StorageLink is the basis for Citrix Essentials
- It exposes storage-vendor functionality: quick provisioning, snapshots, quick cloning, thin provisioning, deduplication, backup and restore capabilities

39 StorageLink Overview
[Diagram: the Virtual Storage Manager (VSM) provides the control path for both XenServer (via DOM0) and Hyper-V (via a VDS bridge in the parent partition) to SAN/NAS arrays such as NetApp and EqualLogic; the data path runs directly from the hosts to the storage]

40 StorageLink Gateway Overview
- Vendor-specific VSM storage adapters run in separate processes
- SMI-S is the preferred method of integration, as it requires no custom development work

41 Storage Technologies
- XenServer 5.5, iSCSI / FC: Storage Repository = LUN carrying an LVM volume group; each VM virtual disk is an LVM logical volume with a VHD header
- XenServer 5.5, iSCSI / FC + StorageLink: each VM virtual disk is an individual LUN within the Storage Repository

42 StorageLink Architecture
- XenServer calls directly into the array APIs to provision and adjust storage on demand
- Fully leverages the array hardware capabilities
- Virtual disk drives are individual LUNs
- Only the server running a VM connects to the individual LUN(s) for that VM
- A special master server coordinates which servers connect to which LUNs

43 Snapshot types

                    XenServer (free)            Essentials Enterprise
Snapshot type       Software-based snapshot     Hardware-based snapshot (software-based also possible when not using StorageLink)
LUN access model    LVM (1 LUN = x VDIs)        LUN-per-VDI (1 LUN = 1 VDI)
Performance         Good                        Superior
Utilization         On the XenServer host       On the storage subsystem
Overhead            Low                         Lowest

44 44 © 2009 Citrix Systems, Inc. — All rights reserved StorageLink: Microsoft look-and-feel

45 Essentials: Example usage scenario
Effective creation of VMs from a template. Whether one clone or three are created, the same steps run for each: 1. copy of the LUN, 2. modification of the zoning, 3. creation of the VM, 4. assignment of the LUN to the VM.
Effectiveness: fast cloning using storage snapshots; fully automated storage and SAN configuration for FC and iSCSI.

46 StorageLink: Supported Storages
StorageLink HCL: http://hcl.vmd.citrix.com/SLG-HCLHome.aspx


