1 AIX Virtual User Group: VIOS Shared Storage Pool 6 - Reminder of the Basics, Advanced Topics & Demo
Nigel Griffiths, Power Systems Advanced Technology Support, IBM Europe

2 Agenda
Info: Benefits, Videos & Blogs, class 101 reminder
Four Frequently Asked Questions
SSP phase 5 - Tiers reminder
SSP phase 6 - 8 new features
Five new-ish free SSP commands from Nigel
Under-the-covers secrets

3 Shared Storage Pool terminology
Cluster = set of co-operating VIOSs
Pool = set of LUNs connected to the cluster, so every VIOS sees every LUN
LU = Logical Unit = SSP virtual disk carved out of the pool; space is allocated in chunks of 1 MB
VIOS vSCSI connection to virtual machines: AIX, Linux, IBM i
Special LUN called the Repository disk holds the cluster config - typically 1 or 2 GB in size, and it can be rebuilt in seconds
SSP commands are simple to use: cluster, lscluster, lu, pv, failgrp, lssp & tier (new)
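As a quick orientation, the commands named above run from the padmin shell on any VIOS in the cluster. A minimal look-around sketch, assuming a cluster called galaxy and a pool called spiral (both hypothetical names):

  $ cluster -list                           # name and ID of the cluster this VIOS belongs to
  $ cluster -status -clustername galaxy     # state of every VIOS node, repository and pool
  $ lssp -clustername galaxy                # pool size, free space and over-commit
  $ pv -list                                # the LUNs (physical volumes) backing the pool
  $ failgrp -list                           # failure groups, i.e. the pool mirrors
  $ tier -list                              # tiers and their sizes (SSP5 onwards)
  $ lu -list                                # every LU (virtual disk) carved from the pool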

4 Shared Storage Pools = SSP
We are at Phase 6, which means: six years of development & releases, and lots of concepts & features - covering the basics takes much more than 1 hour
Pre-requisites for this SSP session:
Technical University session by Linda Flanders
Or the Power Systems SSP webinars on the earlier phases
Or 1+ years hands-on with SSP

5 Nigel's YouTube Videos
YouTube search: "Shared Storage Pools Nigel Griffiths" - roughly 16,100 YouTube views in total
SSP Intro - 2600 views
SSP2 Getting Started - 721 views
SSP2 Thin Provisioning Alerts - 278 views
SSP3 New Features - 1200 views
Looking Around a SSP - 591 views
Live Partition Mobility (LPM) with SSP - 1295 views
Recover a Crashed Machine's LPAR to Another Machine - 25 mins - 627 views
Migrating to SSP & then LPM - 1028 views
SSP 4 Concepts - 2041 views
SSP 4 Hands On - 1041 views
PowerVC with SSP - 1488 views
SSP in 3 Commands in 3 Minutes - 625 views
SSP Repository is bullet proof - 522 views
SSP Remote Pool Copy Activation for Disaster Recovery - 23 mins - 617 views
PowerVM VUG: VIOS Shared Storage Pool - 620 views
Power Systems VUG 50: VIOS Shared Storage Pool - 544 views
VIOS Shared Storage Pool phase ... - 269 views

6 Nigel's AIXpert Blog topics on SSP
VIOS Shared Storage Pool phase 6 - New Readme Content
VIOS Shared Storage Pool phase 6 - New Features
Shared Storage Pools - Migrating to New Disk Subsystem
Shared Storage Pools question on two pools?
Shared Storage Pool 5 - VIOS upgrade status script
Shared Storage Pools 5 - advanced lu -list, the search continues
Shared Storage Pools - Hands-On Fun with Virtual Disks (LU) by Example * Download my SSP tools from this blog - 5000+ views
Shared Storage Pool 4 - Best Practice & FAQ * Updated to cover SSP6 - 13,660+ views
Shared Storage Pools 4 - Growing the Pool using LUN Expansion
Shared Storage Pools 4 - Cheat Sheet * Command summary by example
How many Shared Storage Pools in the world?
VIOS Shared Storage Pool Single Repository Disk = Not a Problem
VIOS Shared Storage Pool phase 4 - How fast is the disk I/O? . . .

7 Marketing: VIOS Shared Storage Pools
Enormous reduction in storage man-power
Sub-second disk space allocation & connection to a VM
lu command: create, map, unmap, remove
LU level snapshot: create/delete/rollback
Easy to live-assimilate a local-disk VM for an I/O boost
Autonomic disk mirrors & resilver with zero VM effort - avoids troublesome Linux mirrors and AIX stale-mirror hunting
Live Partition Mobility ready by default
Simple pool management: pv & failgrp, lssp, lscluster/cluster, alerts escalated to the HMC logs
DR capability to rebuild the whole SSP & a VM quickly
HMC GUI for fast SSP disk setup across dual VIOS - no more VIOS slot numbers, Cnn or vhosts
Tiers: separate groups of LUNs with different disk properties - for mixed SSD/HDD and live-migrating a VM to different disks
As fast as NPIV (takes a little more VIOS CPU, but look at the functions you get)
Independence from the underlying SAN technology & team!

8 SSP Architecture

9 SSP5 Webinar Architecture - Initial setup = EASY
LUNs zoned & online in the VIOSs
cluster -create
cluster -addnode (for each VIOS)
failgrp -create - you now have a mirrored SSP
lu -create -lu xyz -size 64G -vadapter vhostNN (1st VIOS) - 1 second later the disk is online in the VM
lu -map -lu xyz -vadapter vhost66 (2nd VIOS) - 1 second later you have dual-VIOS paths
pv -add to add more LUNs
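For reference, a fuller worked sketch of that setup as padmin; the cluster, pool, hostnames, hdisks and vhost numbers are hypothetical, and the default failure-group name (shown as Default) can vary with how the pool was created:

  $ cluster -create -clustername galaxy -repopvs hdisk5 \
        -spname spiral -sppvs hdisk6 hdisk7 -hostname vios1a
  $ cluster -addnode -clustername galaxy -hostname vios1b
  $ failgrp -create -fg mirrorb: hdisk8 hdisk9         # second copy of the pool = mirrored SSP
  $ lu -create -lu xyz -size 64G -vadapter vhost0      # create the LU and map it on this VIOS
  $ lu -map -lu xyz -vadapter vhost0                   # on the 2nd VIOS: map the same LU for dual paths
  $ pv -add -fg Default: hdisk10 -fg mirrorb: hdisk11  # grow the pool, one LUN per mirror copy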

10 SSP5 Webinar Architecture - Initial setup = EASY - 3 Advanced Functions
LPM by default
Adopt an old VM into the SSP
Simple VM recovery

11 FAQ 1: Are my disks supported by SSP?
Are my disks supported by SSP? If they are supported by VIOS then they are good for SSP - see the VIOS datasheet: webapp/set2/sas/f/vios/documentation/datasheet.html
How do I get Shared Storage Pools 6? SSP6 is delivered as VIOS fix packs, so just upgrade your VIOS
Does SSP support NPIV? No it does not, never will and it is impossible! What a silly question :-)

12 FAQ 2: Rate these in terms of disk speed
Quiz time!! Rank these:
Local VIOS disk via vSCSI
LU over SSP
SAN LUN over NPIV
FC LUN via vSCSI

13 Rate in terms of disk speed
Shocking, right! Benchmarked at IBM Montpellier (I/O per second and latency) to prove it.
Ranking, fastest first: SSP >= NPIV > FC LUN via vSCSI > local disk via vSCSI
Notes:
1 SSP does take a little more CPU than NPIV, but SSP needs drastically less SysAdmin time
2 NPIV itself has no automatic mirrors & resilver, nor sub-second create and assign
3 The NPIV disk subsystem might have snapshots, simple cloning, thin provisioning
4 With a VM on a single LU/LUN, SSP was faster

14 SSP5 Tiers - a quick dip

15 Phase 5 - November 2015: SSP Tiers (multiple pools, only better)
9 tiers (think disk grouping - not a hierarchy of levels)
System tier (with meta data) or user tier (data only)
failgrp (mirroring) is now at the tier level [was whole-pool level]
tier -create -tier blue: <list of LUNs>
lu -create -lu fred -size 32G -tier blue
SSP LU move - move a LU between tiers: lu -move -lu vm42 -dsttier blue
SSP LU resize (grow only, saves admin time): lu -resize -lu vm42 -size 128G
SSP + tier support in the HMC Enhanced+ view
VIOS alerts escalated to the HMC for tiers

16 What is good Tier use? It is about the underlying disks
Key: Green = storage speed, Red = location, Blue = protection (RAS)
Fast, medium, slow - highlight that the LUNs have different performance
IBM, HDS, EMC - remind yourself of the underlying disk unit vendor and the implications of that
Prod, in-house, test - make sure you know different policies are in place for the important data
V7000a, V7000b - you know explicitly which disk unit the data is in, to reduce risk
Room101 & Room102, datacentre1 & datacentre2 - clarify where (geographically) the LUNs are placed
Remote-mirror, local-mirror, un-mirrored - separate disks that are remotely mirrored (failgrp in SSP terms), locally mirrored or not mirrored
Bad use of tiers: rootvg & data VG, RDBMS data & logs

17 VIOS tier related commands
How to mirror a tier? failgrp -create -tier tiername . . .
  failgrp -create -tier red -fg mirrorb: hdisk30 hdisk31
How to move a LU between tiers? lu -move -lu luname -dsttier tiername
  lu -move -lu vm42boot -dsttier red
How to grow a tier in size? pv -add -tier tiername . . .
  pv -add -tier red -fg mirrora: hdisk22 -fg mirrorb: hdisk32
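Putting those together, a small end-to-end sketch of a tier's life cycle; the tier, LU, vhost and hdisk names are hypothetical, and "mirrora" stands for whatever your tier's first failure group is called:

  $ tier -create -tier red: hdisk20 hdisk21                        # new user tier from two LUNs
  $ failgrp -create -tier red -fg mirrorb: hdisk30 hdisk31         # mirror just that tier
  $ lu -create -lu vm42boot -size 32G -tier red -vadapter vhost3   # new LU placed in the tier
  $ lu -move -lu vm42boot -dsttier blue                            # later, migrate the LU to another tier, live
  $ pv -add -tier red -fg mirrora: hdisk22 -fg mirrorb: hdisk32    # grow the tier, one LUN per mirror copy
  $ tier -list -verbose                                            # check sizes, free space and mirror state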

18 SSP6 What’s New?

19 Phase 6 - November 2016 - Available with VIOS 2.2.5 (11th Nov)
1 Increased SSP VIOS nodes from 16 to 24
2 Fully supports the developerWorks SSP Disaster Recovery software
3 RAS (Reliability, Availability & Serviceability):
  3a Cluster-wide snap automation
  3b Asymmetric network handling
  3c Lease by clock tick
4 lu -list -attr provisioned=<true/false>
5 HMC: further GUI support for SSP - arrives with HMC 860
6 HMC Performance & Capacity Metrics (SSP performance stats)

20 SSP 6 - VIOS nodes: increased from 16 to 24
In practice, with the normal dual VIOSs that means 12 servers
Note: Live Partition Mobility within the 12 servers
Need more? Use more than one SSP - this also limits the failure domain
Largely a test & support statement: it is a large effort for SSP development test, as they assume worst-case, extremely high disk I/O rates

21 SSP 6 - 24 VIOS nodes
Nigel's notes on overall SSP disk I/O rates:
Low I/O rates = no problem. You can go higher than 24 nodes, but it's not supported
Medium I/O rate SSP = no problem. It may demand more VIOS CPU + RAM - use the VIOS "part" command to monitor them
High or extreme I/O rate = take care. See the VIOS installation Readme or my notes on it (AIXpert blog: VIOS SSP phase 6 New Readme Content). Put the SYSTEM (metadata) tier on SSD or Flash. Monitor whole-SSP disk I/O rates (see item 6)
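The "part" (performance advisor) check mentioned above is easy to run; a minimal sketch, assuming a 30-minute sample is long enough to be representative:

  $ part -i 30       # run the VIOS performance advisor on this node for 30 minutes

It typically leaves a compressed report (a .tar file) under the padmin home directory; extract it and open the advisor page in a browser - the SSP checks it performs are shown on the next two slides.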

22 VIOS “part” - checks continue to grow
SSP Details

23 VIOS “part” - checks continue to grow
SSP Details

24 2) Disaster Recovery - available for a few years as developerWorks scripts
Was "at your own risk" - now fully supported
On a site failure you can rebuild your SSP:
Mirror or remote-copy the SSP LUNs to the other site (using SAN disk subsystem features)
Backup the SSP config with: viosbr -autobackup
Run viosbr -dr to fix up the config file to use the alternative VIOS nodes, LUN disk names & LU to virtual machine mappings
viosbr -restore from the fixed backup file
New SSP running as on the original site

25 2) Disaster Recovery - check out the manuals for these viosbr options
viosbr -dr -clustername clusterName -file FileName -type devType -typeInputs name:value [,...] -repopvs list_of_disks [-db]
viosbr -autobackup {start | stop | status} [-type {cluster | node}]
viosbr -autobackup save
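As a rough sketch of the flow described on the previous slide, using the syntax above; the cluster name, backup file name and the hdisk for the new repository are hypothetical, and the exact -type/-typeInputs values used to remap node names and LUN names (and the full -restore options) are in the viosbr manual page:

  $ viosbr -autobackup start -type cluster      # on the healthy site: keep automatic SSP config backups
                                                # (viosbr -autobackup save forces one immediately)
  ... after the site failure, the SAN copies of the pool LUNs are made visible to the DR-site VIOSs ...
  $ viosbr -dr -clustername galaxy -file /home/padmin/cfgbackups/galaxy_backup.tar.gz \
        -type <devType> -typeInputs <name:value,...> -repopvs hdisk9
                                                # fix up the backup for the DR-site nodes, disks & repository
  $ viosbr -restore -clustername galaxy -file <the fixed-up backup file> ...
                                                # recreate the cluster, pool and LU mappings as on the original site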

26 Phase 6 - RAS: Cluster snap
3a Cluster-wide snap automation
It seems every PMR's first response from support is: "Can we have a snap?"
Capturing 16 (or now 24) VIOS snaps by hand is tricky
clffdc collects snap data from every node in the SSP cluster: a single cluster snapshot (csnap), compressed tar, time co-ordinated - see the manual pages for details
Example: all data and priority=medium, as padmin: clffdc -c FULL -p 2
Saves to /home/ios/logs/ssp_ffdc/ e.g. snap_..._090336_by_full_Med_c33.tar.gz ("c33" is the co-ordination counter)
Expect LARGE files, so make /home LARGE

27 Phase 6 - RAS: Asymmetric Network Handling
3b New internal algorithm
Normally a managing node decides who is in the cluster, who gets expelled & who can join
Rare condition: the cluster manager has a partial network issue, i.e. it can communicate with some nodes but not all nodes [asymmetric]
Now the manager node double-checks whether the good nodes can talk to the missing nodes
Result: during flaky network issues we get fewer unnecessarily expelled nodes = SSP stability while you fix the network
The "managing node" is one of the VIOSs & moves around as VIO Servers start & stop

28 Phase 6 - RAS: Lease by Clock Tick
3c Lease by clock tick
The SSP VIOSs heartbeat each other to spot an unresponsive node & eventually expel it from the cluster
When the VIOS returns it needs to catch up on SSP config and LU meta data, so its information is current
In the past: if a user changed the time/date/zone it looked like a massive time jump = the data appears very out of date & forces an expel & rejoin
You had to know to offline the VIOS & client LUs using: clstartstop -stop ... ; change date+time; clstartstop -start ...
Once at the first VIOS 2.2.5 service pack you can change the date/time at any time :-)

29 4) lu -list -attr provisioned=<true/false>
If you say "mapped" whenever you read the word "provisioned" you can work out what this does!
$ lu -list -attr provisioned=false      <- not mapped to any VM
POOL_NAME: spiral
TIER_NAME: SYSTEM
LU_NAME    SIZE(MB)    UNUSED(MB)    UDID
test...    ...         ...           ...c043c90a91d434178a
test...    ...         ...           ...efe7e54f79af7af67
test...    ...         ...           ...b...cebd7357bd658
$ lu -list -attr provisioned=true       <- mapped to a VM
test...    ...         ...           ...de85f8af711f93a7a288
test...    ...         ...           ...ec547808deeba80398
$
Unfortunately, it does not say where test3 or test5 are mapped - more on that later with nmap
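A tiny follow-on sketch to pull out just the names of the unprovisioned (unmapped) LUs - candidates to review or remove; the awk line count is an assumption based on the three header lines shown above:

  $ lu -list -attr provisioned=false | awk 'NR>3 {print $1}'     # LU names only
  $ for LU in $(lu -list -attr provisioned=false | awk 'NR>3 {print $1}')
  > do
  >     echo "Not mapped to any VM: $LU"
  > done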

30 5) HMC SSP on the new Enhanced+ User GUI
DEMO

31 5) HMC SSP on the new Enhanced+ User GUI

32 6) HMC REST API for SSP performance stats
Is your SSP used lightly, moderately or hammered?
It is difficult to get an overview of SSP I/O stats:
nmon from 24 VIOSs = hard work to merge!
The same LUN has a different hdiskNN name on each VIOS
A VIOS might also be doing non-SSP disk I/O
DEMO

33 6) HMC REST API for SSP performance stats
Data flow: POWER8 VIOSs -> HMC -> sysadmin "home" server (Linux or AIX & Python v3)
HMC interactions via the REST API:
Logon - PUT
Preferences - GET (XML)
Switch on stats - POST (XML)
Request filenames - GET (XML)
Request files - GET (XML), files are in JSON (50 MB!)
Reformat into a webpage: Googlechart .html/JavaScript
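A minimal sketch of the first step of that conversation using curl rather than Python, just to show the shape of the API; the HMC hostname and credentials are placeholders, port 12443 and the LogonRequest XML are the standard HMC REST API logon as I understand it, and the exact PCM (Performance & Capacity Monitoring) URLs for the SSP files come from the preferences/feed documents mentioned above:

  $ cat >logon.xml <<'EOF'
  <LogonRequest xmlns="http://www.ibm.com/xmlns/systems/power/firmware/web/mc/2012_10/"
                schemaVersion="V1_0">
     <UserID>hscroot</UserID>
     <Password>mypassword</Password>
  </LogonRequest>
  EOF
  $ curl -k -X PUT https://hmc9:12443/rest/api/web/Logon \
        -H "Content-Type: application/vnd.ibm.powervm.web+xml; type=LogonRequest" \
        -d @logon.xml
  # the response carries an X-API-Session token; send it back as the X-API-Session header
  # on the later GET/POST requests for the preferences and the SSP metrics files (JSON)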

34 6) HMC REST API for SSP performance stats
A 300-line Python script handles the XML and JSON
The HMC can provide 3 data types:
Raw - painful to use (you must compute deltas / seconds)
Processed - the last 2+ hours, every 5 minutes
Aggregated - min, max, averages for different capture rates
You can also limit the request by date/time or row count
The JSON includes five levels of detail: whole SSP -> VIOS node -> tier -> failgrp -> individual disks
Stats: size + free, read + write: IOPS, KB/s, service times, errors, timeouts
Output is a processed but simple CSV file, then a ksh script generates the graph webpage

35 6) HMC REST API for SSP performance stats

36 6) HMC REST API for SSP VIOS level stats

37 Are you interested in trying this?
I am willing to help you (if I get sample data) - see the AIXpert blog entry
Short-term issues: the blog is not finished yet = just a few hints and graphs. I plan to finish it tomorrow or Monday 27th Feb 2017
There are a few "teething problems" at the moment: some of the numbers are not right. The SSP developers are working on this & hopefully it will be resolved in the next few days/weeks. When the fixes are available, I will add the details to the blog.

38 Five New Useful SSP commands from Nigel

39 New SSP commands from Nigel
ncluster - status of all VIOSs
nlu - improved lu replacement
npool - storage pool use
nmap - finds if a LU is online (mapped) on any VIOS
nslim - copy a fat LU backup to a now-THIN LU
Hands up - how many of you have read this AIXpert blog: Shared Storage Pools Hands-On Fun with Virtual Disks (LU) by Example
DEMO

40 ncluster - check the status of all VIOSs
No State Repos Pool Role ---Upgrade-Status--- Node-Name
1  OK    OK    OK   DBN  ON_LEVEL             bronzevios1.ibm.com
2  OK    OK    OK        ON_LEVEL             silvervios1.ibm.com
3  OK    OK    OK        ON_LEVEL             goldvios1.ibm.com
4  OK    OK    OK        ON_LEVEL             orangevios1.ibm.com
5  OK    OK    OK        ON_LEVEL             redvios1.ibm.com
Can take 10 seconds to run. Ksh, 2 lines, based on: cluster -status -verbose
UP_LEVEL means not all VIOSs are on the higher ioslevel yet
DBN = database node - the one that writes to the repository
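The real ncluster is Nigel's own two lines of ksh; a rough equivalent you could build yourself, assuming a cluster named galaxy - it simply filters the verbose status output, and the field labels in the grep are an assumption that may need adjusting to your VIOS level:

  $ cluster -status -clustername galaxy -verbose | \
      grep -E 'Node Name|Node State|Repos State|Pool State|Upgrade Status|Roles'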

41 nlu - improved lu replacement: formatted and ordered
SizeMB  UsedMB  Used%  Type   Tier    Name
 ...     ...     ...%  THIN   SYSTEM  AIX735_b
 ...     ...     ...%  THICK  SYSTEM  JUNK2
 ...     ...     ...%  THIN   SYSTEM  blue_backupvg
 ...     ...     ...%  THIN   SYSTEM  blue_rootvg
 ...     ...     ...%  THIN   SYSTEM  blue_scratchvg
 ...     ...     ...%  THIN   SYSTEM  blue_webvg
 ...     ...     ...%  THIN   SYSTEM  emerald3
 ...     ...     ...%  THIN   SYSTEM  emerald3
 ...     ...     ...%  THICK  SYSTEM  orange5a
 ...     ...     ...%  THIN   SYSTEM  purple3boot
 ...     ...     ...%  THIN   SYSTEM  purple3files
 ...     ...     ...%  THIN   SYSTEM  ruby32boot
 ...     ...     ...%  THIN   SYSTEM  ruby32data1
 ...     ...     ...%  THIN   SYSTEM  vm100
Removed the hex ID numbers & mixed-in snapshots. You can re-order by column name.

42 nlu - improved lu replacement, format, ordered
$ nlu –usedmb SizeMB UsedMB Used% Type Tier Name . . . % THIN SYSTEM vm100 % THIN SYSTEM vm91_mirror % THIN SYSTEM vm61c % THIN SYSTEM vm36 % THIN SYSTEM vm61a % THIN SYSTEM AIX735_b % THIN SYSTEM emerald3 % THIN SYSTEM vm18boot % THIN SYSTEM vm23a % THIN SYSTEM vm24a % THIN SYSTEM purple3boot % THIN SYSTEM ruby32boot % THIN SYSTEM vm27b % THIN SYSTEM vm21a 10 lines of ksh, based on lu -list –field nlu  like lu and reorder on a column nlu [-sizemb | -usedmb | -used | -type | -tier | -name (default)]

43 nmap - is a LU mapped to a vhost somewhere?
Ksh, 30 lines; uses clcmd to run lsmap cluster-wide
$ nmap -h
Nigel's nmap to find if a LU (virtual disk) is mapped anywhere on the SSP to a LPAR/VM
  nmap lu-name
  nmap ALL
$ nmap vm21
Search the SSP for vm21
NODE diamondvios2.aixncc.uk.ibm.com
NODE diamondvios1.aixncc.uk.ibm.com
NODE greenvios2.aixncc.uk.ibm.com
NODE greenvios1.aixncc.uk.ibm.com
NODE limevios2.aixncc.uk.ibm.com
  vhost6:U...-V4-C8:vm21a.5cc155a78dbc3d1b...a70ec0788
NODE limevios1.aixncc.uk.ibm.com
  vhost7:U...-V1-C6:vm21a.5cc155a78dbc3d1b...a70ec0788
NODE purplevio2.aixncc.uk.ibm.com
NODE purplevio1.aixncc.uk.ibm.com
NODE rubyvios2.aixncc.uk.ibm.com
NODE emeraldvios2.aixncc.uk.ibm.com
NODE emeraldvios1.aixncc.uk.ibm.com
NODE rubyvios1.aixncc.uk.ibm.com
NODE indigovios1.aixncc.uk.ibm.com

44 npool - storage pool use
Cluster name = globular, Pool = Pacific
Pool-Size = ... MB
Pool-Used = ... MB = 64.94%
Pool-Free = ... MB = 35.06%
Allocated to client VMs = ... MB
Allocated compared to Pool = 117.46%
Used to Allocated ratio = 55.29%
Overcommit = ... MB
Ksh, 9 lines, using cluster & lssp
No "ntier" command - better to use: tier -list -verbose

45 Let's talk about sparse files
Hands up - who knows about them?
Open a file A, write 4 KB, lseek() to 500 KB, write 4 KB, close it. How big is file A?
Answers: 500 KB + 4 KB = 504 KB, or 4 KB + 4 KB = 8 KB?
(The file appears to be 504 KB but only 8 KB of blocks are allocated - it is zero-filled all through the middle.)
Now make a copy: cp A B. How big is file B? 504 KB - the copy reads the zeros and writes them all out.
SSP Thick provisioned LU = a full fat file = every block allocated
SSP Thin provisioned LU = a sparse file, but at a 1 MB chunk size
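A quick way to see this for yourself; a minimal sketch using dd on any AIX or Linux box - the file names are arbitrary and the sizes in the comments are what you would typically expect:

  $ dd if=/dev/zero of=A bs=4k count=1                         # write 4 KB at the start of file A
  $ dd if=/dev/zero of=A bs=4k count=1 seek=125 conv=notrunc   # "lseek" to 500 KB and write another 4 KB
  $ ls -l A       # logical size ~516096 bytes (504 KB)
  $ du -k A       # allocated space ~8 KB - the middle is a hole
  $ cp A B
  $ du -k B       # typically ~504 KB - a plain cp reads the zeros and writes real blocks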

46 nslim - copy a fat LU backup to a now-THIN LU
C code, 96 lines. Very dangerous.
# ./nslim -h
Usage: ./nslim (v3) is a filter style program using stdin & stdout
        It will thinly write a file (only copy non-zero blocks)
        It uses 1MB blocks
        If a block is zero-filled then it is skipped using lseek()
        If a block has data then it will write() the block unchanged
Example:
        ./nslim   <AIX.lu   >SSP-LU-name
Flags:
        -v for verbose output: for every block you get a W=write or .=lseek on stderr
                ./nslim -v   <AIX.lu   >SSP-LU-name
                this gives you visual feedback on progress
        -t like verbose but does NOT actually write anything to stdout
                this lets you passively see the mix of used and unused blocks
                ./nslim -t   <AIX.lu
        -h or -? outputs this helpful message!
Warning: get the redirection wrong and you will destroy your LU data

47 nslim - copy a fat LU backup to a now-THIN LU
Examples (very dangerous - double check the redirection):
# nslim -v   <$SSP/VOL1/old99.*    >$SSP/VOL1/new42.*
# ./nslim -t  <$SSP/VOL1/new77.*
Processing
W W...WWWWWWWWW W WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW.WWWWWWWWWWW.....WWWWWWWWW.....WWWWWWWWWW....WW..WWWWWWWWWWWWWWWWWWWWWWWW
WWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWWW....

48 nslim v4 - outputs 1 line per GB
# /home/padmin/nslim_v4 -V <TEMP32G.6bfd6b0dd9c975c8cae0f69bf... >vm99.32bcf97e86fad24f9dca0400cebe06d3
Counting
 1 Written= 980 Skipped=  44
 2 Written= 989 Skipped=  35
 3 Written= 407 Skipped= ...
 . . . (one line per GB) . . .
Percent Written=20.27% Skipped=79.73%
Done

49 Under the covers - Shhh! This bit is secret!
Loads more info in the AIXpert blog

50 What is that VIOS filesystem about?
$ df
Filesystem       blocks  Free  %Used  Iused  %Iused  Mounted on
/dev/hd4           ...    ...   ...%    ...    ...%  /
/dev/hd2           ...    ...   ...%    ...    ...%  /usr
/dev/hd9var        ...    ...   ...%    ...    ...%  /var
/dev/hd3           ...    ...   ...%    ...    ...%  /tmp
/dev/VMLibrary     ...    ...   ...%    ...    ...%  /var/vio/VMLibrary
/dev/hd1           ...    ...   ...%    ...    ...%  /home
/dev/hd11admin     ...    ...   ...%    ...    ...%  /admin
/proc               -      -     -       -      -    /proc
/dev/hd10opt       ...    ...   ...%    ...    ...%  /opt
/dev/livedump      ...    ...   ...%    ...    ...%  /var/adm/ras/livedump
/ahafs             ...    ...   ...%    ...    ...%  /aha
...                ...    ...   ...%     -      -    /var/vio/SSP/spiral/D_E_F_A_U_L_T_...
Never ever create a file in /var/vio/SSP/... - treat it like /dev!!!

51 LU files & hidden snapshots
$ df
(the usual rootvg filesystems plus /var/vio/VMLibrary and /scratch, and then the pool itself:)
...   ...   ...%   -   -   /var/vio/SSP/stellar/D_E_F_A_U_L_T_...
# cd /var/vio/SSP/stellar/D_E_F_A_U_L_T_...
# ls -l
total 24
drwxr-xr-x 2 root system 4096 Aug 21 13:43 IM
drwxr-xr-x 7 root system  512 Jun 06 21:47 VIOSCFG
drwxr-xr-x 2 root system 8192 Oct 03 17:14 VOL1
Slide callouts: "Backup LU from VIOS", "PowerVC Deploy Images" (the IM directory), "Dragons! LU files & hidden snapshots" (the VOL1 directory)

52 PowerVC horrid LU names!
# ls –s /var/vio/SSP/stellar/D_E_F_A_U_L_T_061310/VOL1 total bronze2a.9e2d5cb119e0ee278be433fc39072e64 4 .bronze3a fbcfb2148a08869a bronze4a dd6f bbfabc497b bronze5a.f98b27fc9f947484ff4a66757e764f28 4 .bronze6a.1488d ed51495e3c70bcc6ef57 4 .fred.63a f9f7ffdd851041f4bd1af 4 .volume-ScratchDisk.2c4bfdae192bdecb196399a0e9688c24 4 .volume-boot-8203E4A_10E0A11-vm613cb6f0e0df0841ecbfaff86f2cc385f volume-boot-8203E4A_10E0A11-vm63a44d94e4e1034b45b1da9c75056c6bd volume-boot-8203E4A_10E0A11-vm8560dc473b76fd42a78fe26fb2c6a02e volume-boot-8203E4A_10E0A31-VM64e7f5cbaeb b2ff690daf6c806a volume-boot-8203E4A_10E0A31-vm f28a3463cb124678d0640ae5a volume-boot-8203E4A_10E0A31-vm82a64fc3ca9a91492b8a000c27d20c5b volume-vm63_backup.b c0ec90529c58bc930460d50e bronze2a.9e2d5cb119e0ee278be433fc39072e bronze3a fbcfb2148a08869a bronze4a dd6f bbfabc497b bronze5a.f98b27fc9f947484ff4a66757e764f bronze6a.1488d ed51495e3c70bcc6ef57 0 fred.63a f9f7ffdd851041f4bd1af 0 volume-ScratchDisk.2c4bfdae192bdecb196399a0e9688c volume-boot-8203E4A_10E0A11-vm613cb6f0e0df0841ecbfaff86f2cc385f volume-boot-8203E4A_10E0A11-vm63a44d94e4e1034b45b1da9c75056c6bd volume-boot-8203E4A_10E0A11-vm8560dc473b76fd42a78fe26fb2c6a02e volume-boot-8203E4A_10E0A31-VM64e7f5cbaeb b2ff690daf6c806a volume-boot-8203E4A_10E0A31-vm f28a3463cb124678d0640ae5a volume-boot-8203E4A_10E0A31-vm82a64fc3ca9a91492b8a000c27d20c5b volume-vm63_backup.b c0ec90529c58bc930460d50e Start with “.” = Meta data Never ever touch these! Regular LU names PowerVC horrid LU names! Raw LU files – some Thin and some Thick Provisioned

53 Nine useful internal SSP operations
1) Rename a Thick provisioned LU: stop the VM, lu -create to make a new LU virtual disk with the new name, dd if=oldLUfile of=newLUfile bs=1m, lu -unmap the old LU, then lu -map the new LU and restart
2) Rename a Thin provisioned LU: as above, but copy with nslim <oldLUfile >newLUfile
3) Backup a virtual disk LU: stop the VM, dd if=LUfile of=/tmp/mylu bs=1m, restart the VM
LUfile here means the file under /var/vio/SSP/SSPname/D_E_F_A_U_L_T_061310/VOL1
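A small sketch of operation 3 as it might be typed on one VIOS node (as root via oem_setup_env, since the LU files live under /var/vio/SSP); the pool path, LU file name glob and backup target are hypothetical, and the VM must be shut down first:

  # VOL1=/var/vio/SSP/spiral/D_E_F_A_U_L_T_061310/VOL1       # adjust to your cluster's path
  # ls $VOL1 | grep vm42boot                                  # find the LU file: name.<long hex UDID>
  # dd if=$VOL1/vm42boot.5cc1* of=/backup/vm42boot.lu bs=1m   # block-for-block copy of the (fat) LU
  # # restart the VM; later, operation 5 with nslim can restore this backup thinly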

54 Nine useful internal SSP operations
4) Recover a previous Thick Provisioned Backup Stop the VM dd if=/tmp/mylu of=LUfile bs=1m Restart the VM 5) Recover a previous Thin   Provisioned Backup lu -unmap -lu LUname lu -remove -lu Luname lu -create -lu newLU nslim </tmp/LUbackup >newLU 6) Live Backup a point in time safe LU using a SSP snapshot -create -lu Luname File hidden from ls output dd of=/temp/backup snapshot -delete LUfile is the file from = /var/vio/SSP/SSPname/D_E_F_A_U_L_T_061310/VOL1 DEMO
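A sketch of operation 6; the cluster, pool, LU and snapshot names are hypothetical, the snapshot command options shown are my reading of the VIOS manual page, and the exact hidden file name under VOL1 varies, so locate it with ls -a before the dd:

  $ snapshot -create vm42snap1 -lu vm42boot -clustername galaxy -spname spiral   # point-in-time copy while the VM runs
  $ oem_setup_env                                                                # the LU files are only visible as root
  # ls -a /var/vio/SSP/spiral/D_E_F_A_U_L_T_061310/VOL1                          # the snapshot appears as a hidden file here
  # dd if=/var/vio/SSP/spiral/D_E_F_A_U_L_T_061310/VOL1/<hidden-snapshot-file> of=/tmp/backup bs=1m
  # exit
  $ snapshot -delete vm42snap1 -lu vm42boot -clustername galaxy -spname spiral   # remove the snapshot once the copy is done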

55 Nine useful internal SSP operations
7) Have a look at the 1 MB blocks in use within a Thin provisioned LU: nslim -t <LUfile
8) Slimming down a Thin provisioned LU that got too thick (AIX only): reduce the file-systems to minimum size (chfs -a size=-1G /home, run many times), create a new maximum-size LV, dd if=/dev/zero of=/dev/LV bs=1m, remove the LV, stop the VM, dd if=LUfile of=/tmp/mylu bs=1m, nslim </tmp/mylu >LUfile, restart the VM and remove /tmp/mylu
9) To move a Thin provisioned LU from a remote SSP to the local SSP: scp the LU file from the remote SSP's VOL1 directory into $SSPlocal/VOL1/
LUfile here means the file under /var/vio/SSP/SSPname/D_E_F_A_U_L_T_061310/VOL1

56 Are you keeping up to date?
mr_nmon on Twitter - only used for POWER/AIX technical content, hints, tips and links
150 techie hands-on videos on YouTube
AIXpert blog - lots of mini articles & thoughts


Download ppt "Nigel Griffiths Power Systems Advanced Technology Support IBM Europe"

Similar presentations


Ads by Google