NetApp Recommendations for Oracle on AIX and JFS2
Jorge Costa SDT - EMEA
Agenda
Recommendations for:
- FlexVol layout
- LVM and FS tuning
- SMO
- AIX tuning
- Oracle tuning
- IO stack tuning
- TRs and additional reading
Best Practices
The next slides contain NetApp's best-practice recommendations for Oracle on AIX servers, covering both performance and compatibility with the SnapManager line of products.
- AIX performs better with a larger number of small LUNs than with a single large LUN
- Tune queue depths
- Limit the AIX buffer cache and give the freed memory to the Oracle SGA
- Limit SGA paging activity
- Spread the DG, LV and FS across every mapped LUN
- Tune each filesystem according to its use
- Enable Oracle to use the new IO options
Best Practices
SnapManager requirements:
- SMO does not take backups of the TEMP tablespaces
- SMO does not back up the online redologs
- SMO takes a backup of the datafiles, archivelogs and control files
Therefore:
- Create dedicated FlexVols for: datafiles, redologs, archive logs, control files, and temp tablespace
- Do not mix LUNs from different FlexVols in the same LVM Diskgroup
Storage:ORACLE:FlexVol Layout
Create the Oracle FlexVols (layout recommendations for performance and SMO compatibility):
- /vol/oradata (datafiles and indexes) [8-16 LUNs]
- /vol/oralog (redologs only) [2-4 LUNs]
- /vol/orarch (archived redo logs)
- /vol/controlfiles (small vol for controlfiles)
- /vol/oratemp (temp tablespace) [4-8 LUNs]
- /vol/orabin (oracle binaries) [1-2 LUNs]
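A minimal sketch of creating this layout on a Data ONTAP 7-Mode controller; the aggregate name aggr1 and all sizes are illustrative assumptions, not from the slides:
vol create oradata aggr1 500g
vol create oralog aggr1 50g
vol create orarch aggr1 200g
vol create controlfiles aggr1 10g
vol create oratemp aggr1 100g
vol create orabin aggr1 20g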
Storage:ORACLE:LVM Layout
Create the Oracle LVM DGs (layout recommendations for performance and SMO compatibility):
- DGoradata (datafiles and indexes)
- DGoralog (redologs only)
- DGorarch (archived redo logs)
- DGcontrolfiles (small DG for controlfiles)
- DGoratemp (temp tablespace)
- DGorabin (oracle binaries)
Storage:ORACLE:FlexVol Options
Set the volume options (vol options <oraclevol>):
nosnap=off, nosnapdir=off, minra=off, no_atime_update=on, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on, convert_ucode=off,
maxdirsize=335462, schedsnapname=ordinal, fs_size_fixed=off, compression=off,
guarantee=volume, svo_enable=off, svo_checksum=off, svo_allow_rman=off,
svo_reject_errors=off, no_i2p=off, fractional_reserve=0, extent=off,
try_first=volume_grow, read_realloc=off, snapshot_clone_dependency=off
* only set fractional_reserve=0 if you also use volume autosize or snap autodelete
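Options are set one per vol options command; a minimal sketch for the two settings most commonly changed from their defaults (the volume name oradata is an assumption):
vol options oradata guarantee volume
vol options oradata fractional_reserve 0
vol options oradata
(the last form, with no option name, lists the current settings)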
Storage: FlexVol Space Management
Use Volume AutoGrow (and Snap Autodelete), but understand the impact of Fractional Reserve first. The linked reference explains Fractional Reserve in human-readable language.
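A minimal sketch of enabling both features on a 7-Mode volume; the volume name, maximum size and growth increment are illustrative assumptions:
vol autosize oradata -m 600g -i 25g on
snap autodelete oradata trigger volume
snap autodelete oradata on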
AIX:Tune the IO stack
Set the queue_depth per device (a loop version is sketched below):
For large servers:
chdev -l hdisk2 -a queue_depth=128
chdev -l hdisk3 -a queue_depth=128
...
chdev -l hdisk16 -a queue_depth=128
For small to medium servers:
chdev -l hdisk2 -a queue_depth=64
chdev -l hdisk3 -a queue_depth=64
...
chdev -l hdisk16 -a queue_depth=64
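The per-disk chdev calls can be wrapped in a loop; a hedged sketch assuming hdisk2-hdisk16 are the NetApp LUNs (keep rootvg disks out of the list):
# if a disk is in use, chdev fails; add -P to defer the change to the next reboot
for d in hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 \
         hdisk9 hdisk10 hdisk11 hdisk12 hdisk13 hdisk14 hdisk15 hdisk16
do
    chdev -l $d -a queue_depth=128    # use 64 on small/medium servers
done
lsattr -El hdisk2 -a queue_depth      # verify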
AIX:Tune the IO stack
Set the queue depth (num_cmd_elems) per HBA:
For large servers:
chdev -l fcs0 -a num_cmd_elems=1024
chdev -l fcs1 -a num_cmd_elems=1024
For small to medium servers:
chdev -l fcs0 -a num_cmd_elems=512
chdev -l fcs1 -a num_cmd_elems=512
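A busy adapter rejects the change, so a hedged variant defers it to the next reboot with -P, then verifies the configured value:
chdev -l fcs0 -a num_cmd_elems=1024 -P
chdev -l fcs1 -a num_cmd_elems=1024 -P
lsattr -El fcs0 -a num_cmd_elems      # verify the configured value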
AIX:Tune the IO stack
Check asynchronous IO and FASTPATH configuration with:
ioo -L | grep aio
ioo -p -o aio_fsfastpath=1   (default setting)
AIX:Oracle SGA
Prevent paging out memory pages of the SGA
(only if App+DB are on the same AIX host and computational pages exceed 80%):
vmo -p -o v_pinshm=1
vmo -p -o maxpin%=[(SGA size * 100 / total mem) + 3]
(Leave maxpin% at the default of 80% unless the SGA exceeds 77% of real memory.)
In Oracle, set LOCK_SGA=TRUE
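A worked example with hypothetical sizes, 64 GB of real memory and a 52 GB SGA: (52 * 100 / 64) + 3 = 84.25, rounded up to 85, which is above the 80% default:
vmo -p -o v_pinshm=1
vmo -p -o maxpin%=85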
AIX:LVM
Create the Diskgroups
(distribute the VolumeGroup across all the LUNs; only use LUNs from the corresponding FlexVol):
mkvg -S -s 32m -y <VGname> \
    hdisk2 hdisk3 hdisk4 hdisk5 \
    hdisk6 hdisk7 hdisk8 hdisk9
* LUNs from FlexVol oradata go into VGoradata
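A quick way to confirm the PPs are spread over every LUN (VGoradata as in the example above):
lsvg -p VGoradata    # one line per hdisk, with total and free PP counts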
AIX:LVM
Create a Logical Volume
(spreading the LV across all the LUNs with -e x):
mklv -t jfs2 -e x -y <LVname> <VGname> <number of LPs>
(mklv sizes the LV in logical partitions; with the 32 MB PP size above, 1 GB = 32 LPs)
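A worked example under the assumptions above (hypothetical LV and VG names): with 32 MB PPs, a 64 GB LV needs 2048 logical partitions:
mklv -t jfs2 -e x -y LVoradata VGoradata 2048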
AIX:FS Make JFS2 options: crfs -a logname=INLINE
If you create a jfs2 filesystem on a striped (or PP-spread) LV, use the INLINE logging option. It avoids "hot spots" by placing the log inside the filesystem (which is striped) instead of on a single PP stored on one hdisk.
crfs -a logname=INLINE
AIX:FS Use Concurrent IO:
Concurrent IO (CIO) was introduced with jfs2 in AIX 5.2 ML1.
- Implicit use of Direct IO
- No inode locking: multiple threads can read and write the same file at the same time
- Performance with CIO is comparable to raw devices
crfs -a options=cio
(CIO can also be requested at mount time; see the sketch below.)
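As an alternative to baking cio into the filesystem attributes, it can be requested per mount; a sketch with a hypothetical mountpoint:
mount -o cio /oradata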
AIX:FS Use Concurrent IO:
Benefits of CIO/DIO for Oracle:
- Avoids double caching: the data is already cached in the SGA
- Faster access path to disk, reducing CPU utilization
- Removes inode-lock contention: several threads can read and write the same file
AIX:FS:ORACLE Create the FS based on its usage:
For Oracle datafiles:
crfs -v jfs2 -d <LVname> -m </mountpoint> -a logname=INLINE -a options=cio
For Oracle redologs:
crfs -v jfs2 -d <LVname> -m </mountpoint> -a logname=INLINE -a agblksize=512 -a options=cio
(When using CIO, IO must be aligned with the jfs2 block size to avoid demoted IO, i.e. a fallback to normal buffered IO after a direct-IO failure. Redo logs are always written in 512-byte units, so set agblksize=512.)
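A worked end-to-end example with hypothetical names (LVoradata, /oradata); -A yes additionally makes the filesystem mount at boot:
crfs -v jfs2 -d LVoradata -m /oradata -A yes -a logname=INLINE -a options=cio
mount /oradata
mount | grep oradata    # confirm cio shows in the mount options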
AIX:FS:ORACLE Create the FS based on its usage:
For archive logs:
crfs -v jfs2 -d <LVname> -m </mountpoint> -a logname=INLINE -a options=rbrw
For control files:
crfs -v jfs2 -d <LVname> -m </mountpoint> -a logname=INLINE -a options=rw
AIX:FS:ORACLE Create the FS based on its usage:
Other filesystems (binaries/applications):
crfs -v jfs2 -d <LVname> -m </mountpoint> -a logname=INLINE
AIX:ORACLE Increase LGWR priority: renice # -p <LGWR PID>
Use lower nice values to raise the CPU scheduling priority of the LGWR Oracle process, so that lgwr can get CPU time when it needs to perform its log writes.
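A hedged sketch of finding and renicing lgwr, with a hypothetical ORACLE_SID of PROD; negative increments require root:
pid=$(ps -ef | grep '[o]ra_lgwr_PROD' | awk '{print $2}')
renice -n -4 -p $pid    # -4 is an illustrative increment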
ORACLE Adjust init.ora:
(On AIX with Oracle 9i and 10g, the recommended database settings are a single DB writer process and async IO.)
disk_asynch_io=true
filesystemio_options=asynch
[asynch instead of setall allows buffered writes to be used on the archivelog area in 9i and 10g]
db_file_multiblock_read_count=32-128
[because data transfer bypasses the AIX buffer cache, JFS2 prefetch and write-behind cannot be used; tune sequential reads by adjusting this parameter]
db_writer_processes=1
lock_sga=true
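If the instance runs from an spfile rather than a plain init.ora, a minimal sqlplus sketch (these are static parameters, so restart the instance afterwards):
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET disk_asynch_io=TRUE SCOPE=SPFILE;
ALTER SYSTEM SET filesystemio_options=ASYNCH SCOPE=SPFILE;
ALTER SYSTEM SET db_writer_processes=1 SCOPE=SPFILE;
ALTER SYSTEM SET lock_sga=TRUE SCOPE=SPFILE;
EOF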
NETAPP: Technologies for Oracle
PAM
The PAM offers a new way to optimize the performance of a NetApp storage system by improving throughput and latency while reducing the number of disk spindles and shelves required, as well as power, cooling and rack-space requirements. It is an array-controller-resident, intelligent 3/4-length PCIe card with 16 GB of DDR2 SDRAM, used as a read cache and integrated with Data ONTAP via FlexScale, software that provides various tuning options and modes of operation.
ORACLE:PAM1 PAM Architecture: 16 GB Read Cache Card
ORACLE:PAM1 Improvements in response time when using PAM
TRs - additional reading
- The NetApp Performance Acceleration Module in File Services Workloads
- NetApp Performance Acceleration Module Oracle OLTP Characterization
- Configuring and Tuning NetApp Storage Systems for High-Performance Random-Access Workloads
- Information Lifecycle Management with Oracle Database 10g Release 2 and NetApp SnapLock
- Oracle Fusion Middleware DR Solution Using NetApp Storage
TRs - additional reading
- Simplified SAN Provisioning and Improved Space Utilization Using NetApp Provisioning Manager
- NetApp Storage Controllers and Fibre Channel Queue Depth
- Oracle 11g Release 1 Performance: Protocol Comparison on Red Hat Enterprise Linux 5 Update 1
- ONTAP 7.2 and 7.3
- NetApp Technology Network