6 What's a Block?

- The basic unit Oracle uses to manage data. Typically 8K in size.
- Data warehouses, or any other database with a lot of long sequential reads, will often use 16K or 32K blocks.
- You can have a database with mixed block sizes, but it is very rare.

Transcript:
Data files and how they work. This is where I'm going to talk a little bit about the block structure of Oracle, compare it to a WAFL block, and give you a sense of what's in a block. An Oracle database can use any block size: 4K, 8K, 16K, 32K; predominantly you'll see 8K blocks. In a data warehouse environment you'll see 16K or 32K blocks, and that's primarily for getting large chunks of data on reads; you'll use those 32K blocks in a data warehouse to get better read performance. You also have the option of using different block sizes for different database components. You don't see that too often, but some people do it. Where I've seen it is a database used for multiple purposes: one database hosting six applications or six business units, where one of them is a pseudo data warehouse, maybe a DSS, and they'll put those objects in data files with 32K block sizes. It's rare, but it's out there; just be aware that someone can do it.

Author's Original Notes:
Obviously this is a case where LUN misalignment would be a big problem.
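The misalignment note can be made concrete with a quick sketch (illustrative Python, not NetApp or Oracle code): count how many 4K WAFL blocks a single Oracle block touches when the partition is aligned versus when it starts at a legacy DOS offset.

```python
# Illustrative sketch: why LUN misalignment hurts. An 8K Oracle block
# should map onto exactly two 4K WAFL blocks; if the partition start is
# not 4K-aligned, every 8K block straddles three WAFL blocks instead.

WAFL_BLOCK = 4096

def wafl_blocks_touched(partition_offset_bytes, oracle_block_size, block_index):
    """Count the 4K WAFL blocks covered by one Oracle block."""
    start = partition_offset_bytes + block_index * oracle_block_size
    end = start + oracle_block_size - 1
    return end // WAFL_BLOCK - start // WAFL_BLOCK + 1

# Aligned partition (offset 0): an 8K block covers exactly two WAFL blocks.
print(wafl_blocks_touched(0, 8192, 0))         # -> 2

# Legacy DOS partition starting at sector 63 (63 * 512 = 32256 bytes):
# misaligned, so the same 8K block now touches three WAFL blocks,
# and so does every other block in the file.
print(wafl_blocks_touched(63 * 512, 8192, 0))  # -> 3
```

Every misaligned 8K read thus turns into three WAFL block reads instead of two, a 50% overhead on the storage side.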
7 Block Structure

- Each block has a header that contains the data block address, SCN, checksum, etc.
- It has a tail with more metadata about the block.
- Not a good candidate for deduplication.

Transcript:
So let's talk about the block structure. Every block in Oracle has a header, and that header contains information about the database, including the block's address; in other words, if you ever get the row ID for a row in the database, that's where the information comes from. It also holds the system change number. We haven't talked about that yet, but the system change number (SCN) is how Oracle tracks changes to the database, so it can be used as part of recovery and restoration, and it's how Oracle keeps consistency within the headers of the data files. There's checksum information in there, and a couple of other things; it's about 20 bytes in the header. Now, for the tail, think of a linked list, if anybody has ever done C programming or something similar. The tail also has information about the SCN, the type of block it is, and what log sequence it's part of. So when you do a recovery and Oracle says, "Hey, for this data file I need log sequences one, two, three," it knows that by comparing the control file against the tails and the headers of the blocks.
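A minimal sketch of the header/tail idea described above (a simplified toy model, not Oracle's actual on-disk block format; field names and sizes are assumptions for illustration): the tail echoes part of the header's metadata, so a fractured (split) write that updates one end of the block but not the other is detectable.

```python
# Toy model of an Oracle-style data block header and tail.
# NOT the real on-disk layout; fields are simplified for illustration.
from dataclasses import dataclass

@dataclass
class BlockHeader:      # small fixed header at the front of every block
    dba: int            # data block address (file# + block#)
    scn: int            # system change number at last change
    block_type: int     # table data, index, undo, ...
    checksum: int

@dataclass
class BlockTail:        # trailing metadata used for consistency checks
    scn_low: int        # low-order bytes of the header SCN
    block_type: int
    seq: int            # sequence number

def block_is_consistent(header: BlockHeader, tail: BlockTail) -> bool:
    """A fractured write leaves header and tail disagreeing."""
    return ((header.scn & 0xFFFF) == tail.scn_low
            and header.block_type == tail.block_type)

h = BlockHeader(dba=0x0100002A, scn=0xABCD1234, block_type=6, checksum=0)
t = BlockTail(scn_low=0x1234, block_type=6, seq=1)
print(block_is_consistent(h, t))  # -> True: header and tail agree
```

If only the front half of the block made it to disk, the stale tail's SCN fragment would no longer match the header, and the check would fail.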
8 Block Structure: Deduplication

- The header should be globally unique in the whole database, meaning that the first 4K WAFL block will have no duplicates.
- The tail is not 100% unique, but it's highly variant, meaning that the final 4K WAFL block will have very few duplicates.

Transcript:
We just talked about this slide; I got a little bit ahead of it. Again, the tail is not 100% unique, but it's pretty close.
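The dedupe argument can be demonstrated with a small simulation (illustrative only, using synthetic data rather than real block contents): split 8K "Oracle blocks" into 4K chunks, the way WAFL sees them, and count duplicate chunks. Because each block's first 4K carries a globally unique header (address plus SCN), the first halves never dedupe even when the row data is identical.

```python
# Illustrative simulation: 8K Oracle blocks split into 4K WAFL chunks.
# Header chunks are unique per block; payload chunks can dedupe.
import hashlib

def chunks_4k(oracle_blocks):
    for blk in oracle_blocks:
        yield blk[:4096]   # contains the globally unique header
        yield blk[4096:]   # row data plus tail

def duplicate_count(oracle_blocks):
    seen, dups = set(), 0
    for c in chunks_4k(oracle_blocks):
        digest = hashlib.sha256(c).digest()
        if digest in seen:
            dups += 1
        seen.add(digest)
    return dups

# Two blocks with identical row data but distinct headers: the payload
# halves dedupe against each other, the header halves do not.
payload = b"\x00" * 4096
blocks = [bytes([i]) * 16 + b"\x00" * (4096 - 16) + payload
          for i in range(1, 3)]
print(duplicate_count(blocks))  # -> 1 (only the second payload chunk)
```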
9 Compression & Deduplication

NetApp Deduplication and Data Compression: Sample Use Cases and Space Savings

Goal of Slide:
Show high-level application recommendations.

Key Messages:
- Data sets that show good savings but are performance sensitive have compression enabled on the backup/archive tier only; some of these data sets have good dedupe savings, so dedupe is recommended for primary.
- Some samples show dedupe on primary but only compression on backup/archive. This is because the combination of both is not greater than compression savings alone.
- More specifics can be found in TR-3505i.a, When to Select Deduplication/Data Compression Best Practices.

Legend:
- Compression Only
- Deduplication Only
- Compression & Deduplication
- Neither
12 Automatic Storage Manager

What is it?

Features:
- Mirroring and striping
- Dynamic storage configuration
- Interoperability with non-ASM databases
- RAC and single instance

Components:
- Disk groups
- Disks
- Failure groups
- Files
- Templates

NetApp interoperability
13 NetApp Adds Value to Oracle ASM

Oracle ASM + NetApp

Data Resilience:
- Protect against single disk failure: Yes
- Protect against double disk failure: No
- Passive block corruption detection
- Active block corruption detection
- Lost disk write detection

Performance:
- Stripe data across ASM disks
- Balance I/O across ASM disks
- Stripe data across physical disks
- Balance I/O across physical disks
- I/O prioritization

Storage Utilization:
- Free space management across physical disks
- Thin provisioning of ASM disks
- Space-efficient cloning

Data Protection:
- Storage Snapshot-based backups
- Storage Snapshot-based restores
14 ASM versus FC: IO Layers

Transcript:
So let's talk a little bit about the I/O stack. With a regular Fibre Channel file system, I/O goes through the OS layer and the file system layer; if I'm using an LVM, it goes through that layer too, and then it gets down to the low-end layers: SCSI, Fibre Channel, and the HBA. With ASM we still have to go through that low-level infrastructure, but what I always call the application tier (to some degree the volume management, the file system, and the OS) is handled by ASM. So it does remove a couple of layers from the stack. It's not that it improves anything per se; it just handles those layers for you, and you don't have to go through some of the OS caching. In some environments you can use ASM with raw devices; there are just different ways to use it, and this is how it works through the I/O stack.
15 Real Application Cluster

- Shared database
- Cluster-aware storage:
  - ASM
  - Oracle Cluster File System (OCFS)
  - NFS
  - Raw devices
- Distance between RAC nodes
17 Introducing Oracle dNFS

What is Direct NFS client?
- Collaborative solution from NetApp & Oracle
- NFSv3 client within the Oracle RDBMS server
- NFS files accessed directly from Oracle
- Eliminates the extra O/S NFS client code path
- Optimized NFS code path for database I/O patterns via direct I/O and asynchronous I/O support
18 Introducing Oracle dNFS

What is Direct NFS client?
- Eliminates the need for NFS mount options
- Standard NFS client implementation across all platforms supported by the Oracle Database, even Windows
- No infrastructure changes are required to change from kNFS to dNFS

Direct NFS
Direct NFS (dNFS) is a new feature provided in the Oracle Database 11g release. One of the primary challenges of operating system kernel NFS administration is the inconsistency in managing NFS configurations across different operating system platforms. The Direct NFS client eliminates this problem by providing a standard NFS client implementation across all platforms supported by the Oracle Database. This also makes NFS a viable solution even on platforms that don't otherwise support it, for example, Windows. The Oracle Direct NFS client is implemented as an Oracle Disk Manager (ODM) interface and can be easily enabled or disabled by linking the relevant libraries.

Benefits of Direct NFS Client (dNFS)
The Direct NFS client in the Oracle Database 11g release overcomes the variability in NFS I/O performance by hosting the NFS client software inside the database kernel, making it independent of the operating system kernel. The NFS client embedded in an Oracle 11g Database provides the following benefits:
- Stable and consistent NFS performance is observed across all operating system platforms.
- The Direct NFS client is tuned to better cache and manage the I/O patterns typically observed in database environments, for larger and more efficient reads and writes.
- The Direct NFS client allows asynchronous direct I/O, which is the most efficient setting for databases. This significantly improves read/write database performance by allowing I/O to continue while other requests are being submitted and processed.
- Database integrity requires immediate write access to the database when requested. Operating system caching delays write access for efficiency reasons, potentially compromising data integrity during failure scenarios. The Direct NFS client uses the database's caching techniques with asynchronous direct I/O to make sure data writes occur expeditiously, reducing data integrity risks.
- The Direct NFS client manages load balancing and high availability by incorporating these features directly, rather than depending on the operating system kernel NFS clients. This greatly simplifies network setup in high-availability environments and reduces dependence on network administrators, eliminating the need to set up network subnets and bonded ports such as Link Aggregation Control Protocol (LACP) bonds.
- The Direct NFS client allows up to four parallel network paths/ports to be used for I/O between the database server and the NAS storage system. For efficiency and performance, these are managed and load-balanced by the Direct NFS client and do not depend on the operating system.
- dNFS overcomes operating system write locking, which can be inadequate in some operating systems and can cause I/O performance bottlenecks in others.
- Database server CPU and memory usage are reduced by eliminating the overhead of copying data between the operating system memory cache and the database System Global Area (SGA).
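As a concrete illustration of the multipath configuration described above (the server name, IP addresses, and mount paths below are placeholders, not from the source), a minimal oranfstab entry might look like this; on Linux/UNIX the file typically lives at /etc/oranfstab or $ORACLE_HOME/dbs/oranfstab:

```
# Illustrative oranfstab entry -- names, addresses, and paths are examples
server: filer1                    # NFS server (storage system) name
path: 192.168.10.1                # up to four network paths may be listed
path: 192.168.10.2
export: /vol/oradata mount: /u02/app/oracle/oradata
```

On 11g the client is enabled by linking in the dNFS ODM library (on 11gR2, for example, `make -f ins_rdbms.mk dnfs_on` in `$ORACLE_HOME/rdbms/lib`; the exact step varies by release), and whether dNFS is actually in use can be checked in views such as `v$dnfs_servers`.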
19 dNFS Optimizes Oracle I/O Traffic

[Diagram: two I/O stacks side by side. Traditional NFS I/O: Database, FS layer, NFS client, TCP/IP layer, driver + NIC hardware (callouts: extra layers, not optimized for Oracle, extra network traffic). Direct NFS I/O: Database, dNFS, TCP/IP layer, driver + NIC hardware.]

Here we see the Oracle Database running on top of an operating system. When the database generates I/O requests to the storage, those reads and writes have to get to the storage and back through the underlying operating system and the network. Traditionally, the operating system passes those requests through multiple layers that are not designed just for Oracle but are general purpose, built to support a wide variety of applications.

What can happen with the traditional approach is that when Oracle pushes large amounts of database traffic through these different layers in the operating system, a lot of things can (and used to) go wrong with both performance and reliability. Examples of historical problems for NFS database I/O: Linux kernel memory depletion, double caching (OS and DB), and OS file descriptor pressure for databases with thousands of data files. Direct NFS eliminates all of these kinds of problems.

Direct NFS streamlines the end-to-end path between the database and the storage. By design, dNFS is automatically asynchronous, reliable, and optimal for Oracle I/O. Oracle took just the essential functionality from the traditional NFS client, rewrote it from scratch, and embedded it in the database. It's a classic example of the "less is more" approach: there is less code because the Oracle NFS client uses only the minimal subset of the NFS protocol and needs no caching, locking, etc. There are fewer layers because Oracle bypasses the O/S file system layer, and the NFS client layer is collapsed into the database.
Less code and fewer layers give you more reliability and better performance. The TCP/IP and NIC driver layers that dNFS relies on are highly performant and robust, because during the '90s the Internet and worldwide web pushed the industry to harden those layers as much as possible. With database I/O over NFS certified in 1997, Oracle pushed the operating systems to harden their FS and NFS layers too, but those have improved more slowly than the underlying networking layers.
20 Oracle dNFS Innovation

dNFS is scalable, reliable, and easy to use!
- Scales across 4 separate network paths between the DB host and the NFS server
- Load balances across available paths
- Scales linearly with the number of paths
- High availability across paths
- Tested with NetApp VIF technology
- No configuring O/S LACP bonding
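The load-balancing and failover behavior above can be sketched as a toy model (plain Python illustrating the concept, not Oracle's implementation): requests rotate round-robin over up to four configured paths, and a path marked down is simply skipped until none remain.

```python
# Toy round-robin path balancer with failover, illustrating the idea
# behind dNFS multipath I/O. Not Oracle code.
from itertools import cycle

class PathBalancer:
    def __init__(self, paths):
        assert 1 <= len(paths) <= 4      # dNFS supports up to four paths
        self.paths = paths
        self.up = set(paths)             # paths currently believed healthy
        self._rr = cycle(paths)

    def mark_down(self, path):
        self.up.discard(path)            # failover: stop using this path

    def next_path(self):
        # Advance the round-robin cursor, skipping any down paths.
        for _ in range(len(self.paths)):
            p = next(self._rr)
            if p in self.up:
                return p
        raise IOError("no paths available")

b = PathBalancer(["192.168.10.1", "192.168.10.2"])
print([b.next_path() for _ in range(4)])  # alternates between both paths
b.mark_down("192.168.10.1")
print(b.next_path())                      # -> '192.168.10.2' only
```

With all paths healthy, throughput scales with the path count because each path carries an equal share of the requests; losing a path degrades bandwidth but not availability.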
21 Improved Scalability with dNFS

- Performs on par with block protocols
- Higher concurrent access to the NFS server

kNFS is used as the baseline. iSCSI shows a throughput increase of 12%, Fibre Channel 41%, and dNFS 39%, very close to Fibre Channel.
22 Protocols

- FCP/FCoE
- iSCSI
- Native NFS
- Direct NFS (dNFS)

DO NOT USE CIFS
23 Performance Considerations

- FCP/FCoE
- iSCSI
- Native NFS
- Direct NFS (dNFS)

Which one do you choose?
- The customer is always right
- Current infrastructure
- Expertise level
- Requirements

DO NOT GET IN A PROTOCOL WAR!
26 Oracle Disaster Recovery Methods: Data Guard

Data availability, data protection, and disaster recovery solution.

What:
- Replicates Oracle databases from one data center to another
- Ability to perform backups from the standby database instead of the production database
- Both physical and logical versions

Image from Oracle Corp