
1 Oracle Architecture
Tom Hamilton – America’s Channel Database CSE

2 Common Oracle Versions
Oracle 8i (no SMO support)
Oracle 9i (no SMO support in SMO 3.3 or later)
Oracle 10g
Oracle 11g
Coming attractions: Oracle 12c (not yet released by Oracle)

3 Oracle Components
Database files
Automatic Storage Management (ASM)
Real Application Clusters (RAC)
Protocols
Disaster Recovery

4 Oracle Database Files Storage System

5 Oracle Database Files
Binaries
Configuration files
Datafiles
Temporary database files
Redo log files
Archived redo log files
Cluster-related files

6 What’s a Block? The basic unit Oracle uses to manage data.
Typically 8K in size. Data warehouses, or any other database with a lot of long sequential reads, will often use 16K or 32K. You can have a database with mixed block sizes, but it is very rare. Transcript: Data files and how they work. This is where I'm going to talk a little bit about the block structure of Oracle, compare it to a WAFL block, and give you a sense of what's in a block. An Oracle database can use any block size: 4K, 8K, 16K, 32K. Predominantly you'll see 8K blocks. In a data warehouse environment you'll see 16K or 32K blocks, primarily for reading large chunks of data; you'll use those 32K blocks in a data warehouse to get better read performance. You also have the option of using different block sizes for different database components. You don't see that too often, but some people do it. Where I've seen it is a database used for multiple purposes: one database hosting six applications or six business units, one of which is a pseudo data warehouse, maybe a DSS, so they put the data files for those objects in 32K block sizes. It's rare, but it's out there; just be aware that someone can do it. Author’s Original Notes: Obviously this is a case where LUN misalignment would be a big problem.
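To make the misalignment note concrete, here is a minimal Python sketch. The 8K Oracle and 4K WAFL block sizes come from this slide; the partition offsets are hypothetical examples. It shows how a partition that does not start on a 4K boundary forces every 8K Oracle block to straddle three WAFL blocks instead of two:

ORACLE_BLOCK = 8192   # typical Oracle block size (bytes)
WAFL_BLOCK = 4096     # WAFL block size (bytes)

def wafl_blocks_touched(partition_offset: int, block_index: int) -> int:
    """Count how many 4K WAFL blocks one 8K Oracle block spans."""
    start = partition_offset + block_index * ORACLE_BLOCK
    end = start + ORACLE_BLOCK - 1
    return end // WAFL_BLOCK - start // WAFL_BLOCK + 1

# Aligned partition: every 8K Oracle block maps cleanly onto two 4K blocks.
print(wafl_blocks_touched(partition_offset=0, block_index=0))      # -> 2
# Misaligned partition (e.g. a legacy 63-sector, 32256-byte offset):
# every 8K Oracle block now straddles three 4K blocks, inflating I/O.
print(wafl_blocks_touched(partition_offset=32256, block_index=0))  # -> 3

The misaligned case turns every Oracle block access into an extra WAFL block read or write, which is why alignment matters before any other performance tuning.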

7 Block Structure. It has a 20-byte header that contains the data block address, the SCN, a checksum, and other metadata. It has a tail with more metadata about the block. Not a good candidate for deduplication. Transcript: So let's talk about the block structure. Every block in Oracle has a header, and that header contains information about the database, including the block's address; in other words, if you ever try to get the row ID for a component of the database, this is where that information is stored. It also holds the system change number. We haven't talked about that yet, but the system change number (SCN) is how Oracle tracks changes to the database, and it is used as part of recovery and restoration and in how Oracle keeps consistency within the headers of the data files. There is checksum information in there, plus a couple of other things; the header is 20 bytes. Now, the tail: think of it like a linked list, if anybody has ever done C programming. The tail also has information about the SCN, the type of block it is, and what log sequence it's part of. So when you do a recovery and Oracle says "Hey, for this data file I need log sequences one, two, three," it knows that based on comparing the control file to the tails and headers of the blocks.
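As an illustration only, here is a Python sketch of a drastically simplified block layout. Only the 20-byte header size and the header/tail fields named above come from the slide; the field widths and names are assumptions, and the real Oracle block format is richer and version-dependent:

import struct

# Illustrative only: field widths are chosen so the header is exactly
# 20 bytes, matching the slide; the field names are assumptions.
HEADER_FMT = "<IQIHH"   # block address, SCN, checksum, block type, flags -> 20 bytes
TAIL_FMT = "<IHH"       # low-order SCN bytes, block type, log sequence -> 8 bytes
BLOCK_SIZE = 8192

def pack_block(address: int, scn: int, checksum: int, block_type: int,
               log_seq: int, payload: bytes) -> bytes:
    header = struct.pack(HEADER_FMT, address, scn, checksum, block_type, 0)
    tail = struct.pack(TAIL_FMT, scn & 0xFFFFFFFF, block_type, log_seq)
    body = payload.ljust(BLOCK_SIZE - len(header) - len(tail), b"\x00")
    return header + body + tail

block = pack_block(address=42, scn=123_456_789, checksum=0xBEEF,
                   block_type=6, log_seq=3, payload=b"row data...")
assert len(block) == BLOCK_SIZE

Because the SCN's low-order bytes appear in both the header and the tail, a consistency check that compares the two ends of a block can detect a fractured (partially written) block, which is the comparison the transcript describes.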

8 Block Structure: Deduplication
The header should be globally unique across the whole database, meaning the first 4K WAFL block of each Oracle block will have no duplicates. The tail is not 100% unique, but it is highly variant, meaning the final 4K WAFL block will have very few duplicates. Transcript: We just talked about this slide; I got a little bit ahead of it. Again, the tail is not 100% unique, but it's pretty close.
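A minimal sketch of why this defeats 4K block-level deduplication, using synthetic data. The unique header and variant tail mirror the slide; the sizes and the SHA-256 fingerprinting are illustrative assumptions:

import hashlib
import os

WAFL_BLOCK = 4096
BLOCK_SIZE = 8192

def chunk_fingerprints(oracle_blocks):
    """Hash each 4K WAFL-sized chunk, the way block-level dedupe
    fingerprints data to find duplicates."""
    prints = []
    for blk in oracle_blocks:
        for off in range(0, len(blk), WAFL_BLOCK):
            prints.append(hashlib.sha256(blk[off:off + WAFL_BLOCK]).digest())
    return prints

# Synthetic worst case that mirrors the slide: identical row payloads,
# but every block carries a unique 20-byte header and a variant 8-byte tail.
payload = b"A" * (BLOCK_SIZE - 20 - 8)
blocks = [dba.to_bytes(20, "little") + payload + os.urandom(8)
          for dba in range(1000)]

prints = chunk_fingerprints(blocks)
unique = len(set(prints))
print(f"{len(prints)} chunks, {unique} unique -> "
      f"{100 * (1 - unique / len(prints)):.0f}% dedupe savings")

Even with identical row payloads in every block, the run prints 0% savings: each 4K chunk carries either the unique header or the variant tail.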

9 Compression & Deduplication
NetApp Deduplication and Data Compression: Sample Use Cases and Space Savings. Goal of Slide: Show high-level application recommendations. Key Messages: Data sets that show good savings but are performance sensitive have compression enabled on the backup/archive tier only; some of these data sets have good dedupe savings, so dedupe is recommended for primary. Some samples show dedupe on primary but only compression on backup/archive, because the combination of both is not greater than the compression savings alone. More specifics can be found in TR-3505i.a, When to Select Deduplication/Data Compression Best Practices. Legend: Compression Only / Deduplication Only / Compression & Deduplication / Neither

10 Oracle Database Backup and Recovery
Archivelog mode
Control files
Redo logs
Archive logs
SCN (SCN-driven roll-forward is sketched below)
Benefits
Consequences
Non-archivelog mode
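A conceptual Python sketch of the roll-forward idea behind archivelog-mode recovery: restore a backup, then re-apply archived redo in SCN order up to the recovery target. This is the concept only, not Oracle's recovery code; all names here are illustrative:

from dataclasses import dataclass

@dataclass
class RedoRecord:
    scn: int          # system change number stamped on the change
    block_id: int
    new_value: bytes

def recover(blocks: dict, archived_redo: list, checkpoint_scn: int,
            target_scn: int) -> dict:
    """Roll forward a restored backup: re-apply every logged change made
    after the backup's checkpoint SCN, stopping at the target SCN."""
    for rec in sorted(archived_redo, key=lambda r: r.scn):
        if checkpoint_scn < rec.scn <= target_scn:
            blocks[rec.block_id] = rec.new_value
    return blocks

# Backup taken at SCN 100, recovered to SCN 250: the SCN-300 change is skipped.
restored = {1: b"value at backup time"}
redo = [RedoRecord(150, 1, b"v2"), RedoRecord(200, 1, b"v3"),
        RedoRecord(300, 1, b"v4")]
print(recover(restored, redo, checkpoint_scn=100, target_scn=250))  # {1: b'v3'}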

11 Oracle Database Backup and Recovery

12 Automatic Storage Management
What is it?
Features
Mirroring and striping (striping across a disk group is sketched below)
Dynamic storage configuration
Interoperability with non-ASM databases
RAC and single instance
Components
Disk groups
Disks
Failure groups
Files
Templates
NetApp interoperability
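ASM allocates space in fixed-size allocation units (1 MB by default) and spreads a file's allocation units across every disk in its disk group. A minimal Python sketch of that placement idea, assuming simple round-robin placement (real ASM also mirrors by failure group and rebalances when disks are added or dropped):

AU_SIZE = 1024 * 1024   # ASM's default allocation unit: 1 MB

def place_file(file_size: int, disks: list) -> dict:
    """Spread a file's allocation units across all disks round-robin,
    the way ASM coarse striping balances I/O across a disk group."""
    n_aus = -(-file_size // AU_SIZE)        # ceiling division
    layout = {d: [] for d in disks}
    for au in range(n_aus):
        layout[disks[au % len(disks)]].append(au)
    return layout

# A 10 MB file over a four-disk disk group: every disk holds 2-3 AUs,
# so reads and writes are balanced across all members of the group.
for disk, aus in place_file(10 * AU_SIZE, ["disk1", "disk2", "disk3", "disk4"]).items():
    print(disk, aus)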

13 NetApp Adds Value to Oracle ASM
Comparison of Oracle ASM alone versus Oracle ASM + NetApp.
Data Resilience: protect against single disk failure (Yes), protect against double disk failure (No), passive block corruption detection, active block corruption detection, lost disk write detection.
Performance: stripe data across ASM disks, balance I/O across ASM disks, stripe data across physical disks, balance I/O across physical disks, I/O prioritization.
Storage Utilization: free space management across physical disks, thin provisioning of ASM disks, space-efficient cloning.
Data Protection: storage Snapshot based backups, storage Snapshot based restores.

14 ASM versus FC: I/O Layers
Transcript: So let's talk a little bit about the I/O stack. With a regular Fibre Channel file system, I/O goes through the OS layer and the file system layer; if you're using an LVM, it goes through that layer too, and then it gets down to the low-level layer: SCSI, Fibre Channel, and the HBA. With ASM we still have to go through that low-level infrastructure, but ASM handles what I always call the application tier: the volume management, the file system and, to some degree, the OS. So it does remove a couple of layers from the stack. It's not that it improves anything by itself; it just handles those layers for you, and you skip some of the OS caching. In some environments you can use ASM with raw devices. There are different ways to use it, and this is how it works through the I/O stack.

15 Real Application Clusters
Shared database
Cluster-aware storage
ASM
Oracle Cluster File System (OCFS)
NFS
Raw devices
Distance between RAC nodes

16 Oracle RAC

17 Introducing Oracle dNFS
What is the Direct NFS client?
Collaborative solution from NetApp & Oracle
NFSv3 client within the Oracle RDBMS server
NFS files accessed directly from Oracle
Eliminates the extra O/S NFS client code path
Optimized NFS code path for database I/O patterns via direct I/O and asynchronous I/O support

18 Introducing Oracle dNFS
What is the Direct NFS client? It eliminates the need for NFS mount options and provides a standard NFS client implementation across all platforms supported by the Oracle Database, even Windows. No infrastructure changes are required to change from kNFS to dNFS.

Direct NFS
Direct NFS (dNFS) is a new feature provided in the Oracle Database 11g release. One of the primary challenges of operating system kernel NFS administration is the inconsistency in managing NFS configurations across different operating system platforms. The Direct NFS client eliminates this problem by providing a standard NFS client implementation across all platforms supported by the Oracle Database. This also makes NFS a viable solution on platforms that don't natively support it, for example, Windows. The Oracle Direct NFS client is implemented as an Oracle Disk Manager (ODM) interface and can be easily enabled or disabled by linking the relevant libraries.

Benefits of the Direct NFS Client (dNFS)
The Direct NFS client in the Oracle Database 11g release overcomes the variability in NFS I/O performance by hosting the NFS client software inside the database kernel, making it independent of the operating system kernel. The NFS client embedded in an Oracle 11g Database provides the following benefits:
• Stable and consistent NFS performance across all operating system platforms.
• The Direct NFS client is tuned to cache and manage the I/O patterns typically observed in database environments, for larger and more efficient reads and writes.
• The Direct NFS client allows asynchronous direct I/O, which is the most efficient setting for databases. This significantly improves read/write performance by allowing I/O to continue while other requests are being submitted and processed.
• Database integrity requires immediate write access to the database when requested. Operating system caching delays writes for efficiency reasons, potentially compromising data integrity during failure scenarios. The Direct NFS client uses database caching techniques with asynchronous direct I/O to make sure writes occur expeditiously, reducing data integrity risks.
• Load balancing and high availability are incorporated directly in the Direct NFS client rather than depending on the operating system kernel's NFS clients. This greatly simplifies network setup in high-availability environments, reduces dependence on network administrators, and eliminates the need to set up network subnets and bonded ports such as Link Aggregation Control Protocol (LACP) bonds.
• The Direct NFS client allows up to four parallel network paths/ports to be used for I/O between the database server and the NAS storage system. For efficiency and performance, these are managed and load-balanced by the Direct NFS client and do not depend on the operating system.
• dNFS overcomes operating system write locking, which can be inadequate in some operating systems and can cause I/O performance bottlenecks in others.
• Database server CPU and memory usage are reduced by eliminating the overhead of copying data between the operating system memory cache and the database System Global Area (SGA).

19 dNFS Optimizes Oracle I/O Traffic
[Diagram: two I/O stacks side by side. Traditional NFS I/O: Database → FS layer → NFS client → TCP/IP layer → Driver + NIC HW → Storage, with callouts for (1) extra layers, (2) code not optimized for Oracle, and (3) extra network traffic. Direct NFS I/O: Database with dNFS → TCP/IP layer → Driver + NIC HW → Storage.]
Here we see the Oracle Database running on top of an operating system. When the database generates I/O requests to the storage, the reads and writes have to get to the storage and back through the underlying operating system and the network. Traditionally, the operating system passes those requests through multiple layers that are not designed just for Oracle but are general purpose, supporting a wide variety of applications. When Oracle pushes large amounts of database traffic through these different operating system layers, a lot of things can (and used to) go wrong with both performance and reliability. Examples of historical problems for NFS database I/O: Linux kernel memory depletion, double caching (OS and DB), and OS file descriptor resource pressure for databases with thousands of data files. Direct NFS eliminates all of these kinds of problems. Direct NFS streamlines the end-to-end path between the database and the storage. By design, dNFS is automatically asynchronous, reliable, and optimal for Oracle I/O. Oracle took just the essential functionality from the traditional NFS client, rewrote it from scratch, and embedded it in the database. It's a classic example of the "less is more" approach: there is less code, because the Oracle NFS client uses only the minimal subset of the NFS protocol and needs no caching, locking, and so on; and there are fewer layers, because Oracle bypasses the O/S file system layer and collapses the NFS client layer into the database. Less code and fewer layers give you more reliability and better performance. The TCP/IP and NIC driver layers that dNFS relies on are highly performant and robust, because during the '90s the Internet and worldwide web pushed the industry to harden those layers as much as possible. With database I/O over NFS certified in 1997, Oracle pushed the operating systems to harden their FS and NFS layers too, but those have improved more slowly than the underlying networking layers.

20 Oracle dNFS Innovation
dNFS is scalable, reliable, and easy to use!
Scales across 4 separate network paths between the DB host and the NFS server
Load balances across available paths (see the sketch below)
Scales linearly with the number of paths
High availability across paths
Tested with NetApp VIF technology
No configuring O/S LACP bonding
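A minimal Python sketch of the multipath idea: round-robin load balancing across up to four paths, with surviving paths absorbing traffic when one fails. This is the concept only, not Oracle's implementation; the path addresses are hypothetical:

import itertools

class MultipathClient:
    """Conceptual sketch of dNFS-style I/O distribution: round-robin
    load balancing over up to four paths, skipping failed paths."""

    MAX_PATHS = 4

    def __init__(self, paths: list):
        self.paths = paths[: self.MAX_PATHS]
        self.alive = set(self.paths)
        self._rr = itertools.cycle(self.paths)

    def pick_path(self) -> str:
        for _ in range(len(self.paths)):
            path = next(self._rr)
            if path in self.alive:
                return path
        raise IOError("no surviving paths to the NFS server")

    def fail(self, path: str):
        self.alive.discard(path)   # HA: traffic shifts to the remaining paths

client = MultipathClient(["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"])
print([client.pick_path() for _ in range(4)])   # spreads across all four paths
client.fail("10.0.0.2")
print([client.pick_path() for _ in range(4)])   # the failed path is skipped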

21 Improved Scalability with dNFS
Performs on par with block protocols
Higher concurrent access to the NFS server
kNFS is used as the baseline; relative to it, iSCSI shows a throughput increase of 12%, Fibre Channel 41%, and dNFS 39%, putting dNFS very close to Fibre Channel.

22 Protocols
FCP/FCoE
iSCSI
Native NFS
Direct NFS (dNFS)
DO NOT USE CIFS

23 Performance Considerations
FCP/FCoE, iSCSI, native NFS, or Direct NFS (dNFS): which one do you choose?
The customer is always right
Current infrastructure
Expertise level
Requirements
DO NOT GET IN A PROTOCOL WAR!

24 Performance Considerations – TR3932

25 Performance Considerations – TR3932

26 Oracle Disaster Recovery Methods: Data Guard
A data availability, data protection, and disaster recovery solution.
What: replicates Oracle databases from one data center to another by shipping redo (see the sketch below).
Ability to perform backups from the standby database instead of the production database.
Available in both physical and logical standby versions.
Image from Oracle Corp
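A toy Python sketch of the replication pattern: the primary streams redo to the standby, which applies it continuously, which is also what lets the standby serve as the backup source. Purely conceptual; the queue stands in for the redo transport, and nothing here is Oracle's implementation:

import queue
import threading

# Toy model of Data Guard-style redo shipping (conceptual only).
redo_transport: queue.Queue = queue.Queue()

def primary(changes):
    for scn, change in changes:
        redo_transport.put((scn, change))   # ship redo, not whole datafiles
    redo_transport.put(None)                # end-of-stream marker for the demo

def standby(standby_db: dict):
    while (item := redo_transport.get()) is not None:
        scn, change = item
        standby_db[scn] = change            # continuous redo apply

standby_db: dict = {}
applier = threading.Thread(target=standby, args=(standby_db,))
applier.start()
primary([(101, b"insert into orders"), (102, b"update customers")])
applier.join()
print(standby_db)   # the standby now reflects the primary's changes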

27

