Unleashing Extreme Performance and Scalability
On Oracle Exalogic Elastic Cloud
Gavin Parish, Product Manager, Oracle Exalogic
Sergey Linetskiy, Consulting Member of Technical Staff, Oracle Public Cloud
Sandeep Rohilla, Solutions Architect, Agilent Technologies
October 02, 2014
Program Agenda
1. Exalogic 101
2. Enhancements OOTB Overview
3. Performance Benchmarks Results
4. Enabling Exalogic Enhancements
5. Tuning for Maximised Performance in the Real World: Agilent Case Study
Exalogic: Hardware and Software Working Together
Exalogic Elastic Cloud Software spans three layers on top of the hardware (x86 compute nodes, storage, InfiniBand switches, InfiniBand HCAs):
- Cloud management: Infrastructure-as-a-Service and Platform-as-a-Service
- System software: Exabus, Oracle Traffic Director, firmware, lifecycle tooling
- App and FMW enhancements: enhancements to WebLogic, Coherence, JVM, EBS, PeopleSoft, ATG, etc.
Exalogic System Hardware Overview
Fast. Easy. Open.
Compute: 2-socket, 12-core, 2.7 GHz Intel Xeon processors; 256 GB of 1600 MHz DRAM; (2) 400 GB SSDs; dual-port QDR InfiniBand HCA (PCIe).
I/O fabric: between 2 and 4 InfiniBand gateway switches; (32) QDR InfiniBand ports; (8) 10GbE ports for datacenter connectivity; 40 Gb/sec internal I/O backplane.
Storage: enterprise-class, integrated network-attached storage; 80 TB SAS disk, 6.4 TB read cache, 292 GB write cache; clones, snapshots, remote replication.
Exalogic Elastic Cloud System Software
The software stack, from top to bottom (the layers below Enterprise Manager make up the Exalogic Elastic Cloud Software):
- Enterprise Manager
- Middleware and business applications: WebLogic, Coherence, Tuxedo, Traffic Director, each with Exabus integration
- Exalogic Control
- Guest OS: Oracle Linux (virtual), or Oracle Linux/Solaris (physical)
- Oracle VM 3 for Exalogic
- Exabus
- Exalogic Elastic Cloud hardware
Exabus is at the Center of Exalogic’s System Software
Exabus is Exalogic’s high-speed I/O backplane, comprised of hardware plus software. Applications and middleware can reach it through four stacks, in increasing order of performance:
- Traditional: sockets API over TCP/IP and an Ethernet NIC driver, on an Ethernet NIC.
- IPoIB: sockets API over TCP/IP, IP over IB, IB core and the HCA driver, on an IB HCA. High-speed IB fabric, no application change required.
- SDP: Exabus sockets API over SDP, IB core and the HCA driver, on an IB HCA. Bypasses the TCP/IP layer to get even higher performance, with minimal application changes (see the sketch after this list).
- Native IB: Exabus Java/C++ APIs over IB core and the HCA driver, on an IB HCA. Highest performance, native IB support.
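The SDP path needs only minimal application changes because recent JDKs can route plain Java sockets over SDP through a configuration file. A minimal sketch, assuming JDK 7's documented SDP support on Oracle Linux; the subnet, addresses and ports are illustrative placeholders:

    # /etc/sdp.conf: rules that map ordinary socket operations onto SDP
    # (subnet, address and port values below are placeholders)
    connect 192.168.10.0/24 1521        # outbound connections to the IB subnet, port 1521
    bind    192.168.10.1    *           # server sockets bound to this local IB address, any port

    # Launch an unmodified sockets application over SDP
    java -Dcom.sun.sdp.conf=/etc/sdp.conf -Djava.net.preferIPv4Stack=true MyApp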
Seamless, High-performance SR-IOV in Exalogic
Traditional I/O: virtual NICs in guest VMs "connect" to Ethernet ports in a vSwitch that runs in the hypervisor. This causes significant I/O performance degradation: every packet causes a guest/hypervisor context switch, and usually multiple buffer copies.
SR-IOV in Exalogic: traffic flows directly between the guest VM and the physical HCA, with each guest VM's device driver attached to its own virtual function on the HCA's physical function. I/O performance is on par with physical Exalogic. Complex high-performance technology made readily consumable.
Exalogic Exabus v2.0 vs v1.1 – Continued Improvement
Exabus v2.0 improves performance across more use cases than v1.1; the chart's headline result is 9x.
Test system: Exalogic Eighth Rack.
"MWM Enterprise" benchmark: a system-level macro-benchmark that emulates a shopping-cart style web interaction and stresses all aspects of a middleware system, specifically CPU, memory and network.
WebLogic Optimizations for Exalogic – Re-cap
- Socket Direct Protocol optimizations: SDP over InfiniBand for JDBC; SDP over InfiniBand for replication network channels
- Work manager and I/O optimizations: scatter/gather I/O for reduced buffer copies; parallel muxer using Java NIO for increased throughput; multi-core aware thread pool increments for quick load ramp-up
- Cluster optimizations: enhanced session replication (multi-channel/socket one-way RMI plus lazy de-serialization)
- Further Sockets Direct Protocol optimization: SDP inbound HTTP (routed from OTD)
- Messaging optimization: JMS lockless request manager
- Web optimization: JSP factory caching; HTTP client request thread pooling
- Operational optimization: fast cluster service recovery (fast server death detection)
(First delivered from WebLogic Server 11gR1 PS3 (10.3.4), January 2011.)
Message Compression Further Enhances Persistent JMS Performance 5X
Application I/O traffic optimizations increase throughput:
- Leverage abundant compute resources to optimize I/O-intensive processing.
- Server-side message compression: messages are persisted to ZFS storage in compressed format.
Result (messages/second, compression vs. no compression): 5x.
Cooperative Memory Management - WebLogic 12.1.3
Enables greater density of deployments within memory constraints:
- The lowest layer analyzes memory state and computes a "memory pressure" value in the JVM (0 to 10).
- The lowest layer updates a JMX MBean (ResourcePressureMBean) accessible to WLS components.
- WLS components (threading, JDBC, parsers) reduce memory consumption under "memory pressure".
JMS Replicated Memory Store – WebLogic 12.1.3
WebLogic Messaging Services (JMS) can use Replicated Stores as a high-performance alternative to the existing File and JDBC storage options. A Replicated Store keeps data in local Exalogic node memory and replicates it, using RDMA over InfiniBand, to memory on a second node, providing high availability with no single point of failure. This yields linearly scalable performance. In the pictured topology, JMS senders and receivers on client machines 1..N work with distributed queues hosted by JMS servers in a WebLogic dynamic cluster (WebLogic 1..N on machines 1..N), backed by a replicated-store daemon cluster (a WLST sketch of creating one follows).
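A minimal WLST sketch, assuming the 12.1.3 DomainMBean factory createReplicatedStore; the connection details, store name, directory and target are placeholders, and the directory is the shared location holding the daemon cluster configuration (rs_daemons.cfg):

    connect('weblogic', '<password>', 't3://adminhost:7001')  # placeholder URL/credentials
    edit(); startEdit()
    rs = cmo.createReplicatedStore('RepStore1')
    rs.setDirectory('/u01/common/rstore')          # must contain rs_daemons.cfg
    rs.addTarget(getMBean('/Servers/server1'))     # placeholder target
    save(); activate()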
JMS Replicated Store - WebLogic 12.1.3 on Exalogic - Scaling
Results: a 14x boost over disk storage, linear scalability, and higher relative performance possible (performance, density, scalability).
Test system: Exalogic Quarter Rack.
"JPERF JMS" benchmark: a micro-benchmark that emulates a real messaging application using MDBs (message-driven beans) and stresses the WebLogic persistent store for JMS. Client JVMs were adjusted to maximize throughput.
Enabling WebLogic Enhancements for Exalogic pt1
The domain-level Enable Exalogic Optimizations setting auto-enables:
- Scatter/gather I/O
- Java NIO muxer
- Multi-core aware thread pool incrementer
- Lockless Request Manager
- JSP factory caching
- HTTP client request thread pooling
- TopLink auto-tuning (12.1.3)
A WLST sketch follows this list.
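A minimal sketch of flipping the switch in WLST, assuming the documented DomainMBean attribute ExalogicOptimizationsEnabled; the connection details are placeholders:

    connect('weblogic', '<password>', 't3://adminhost:7001')  # placeholder URL/credentials
    edit(); startEdit()
    cd('/')
    cmo.setExalogicOptimizationsEnabled(true)  # console: domain Settings > General > Enable Exalogic Optimizations
    save(); activate()
    # Restart the servers for the setting to take effect.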
Enabling WebLogic Enhancements for Exalogic pt2
Enabling JDBC store Exalogic optimizations:
- Select Oracle Piggyback Commit Enabled on the JDBC persistent store service.
Session replication:
- Create a replication channel for each server, specifying a port range and enabling SDP.
- Customise cluster settings, enabling one-way RMI and lazy replication.
JDBC over SDP (also ensure the DB server is configured for SDP; a combined sketch follows this list):
- Add the JVM argument -Doracle.net.SDP=true.
- Use SDP rather than TCP in Oracle DB data source URLs.
- In the Data Source > Configuration > Oracle properties, select "Oracle Enable JavaNet Fastpath" (reduces data copies in the driver).
JMS Replicated Store:
- Create a Replicated Store under Services > Persistent Stores.
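For the JDBC-over-SDP items, a sketch of how the pieces fit together; the host, port and service name are placeholders:

    # JVM argument on each WebLogic server (e.g. appended to JAVA_OPTIONS)
    -Doracle.net.SDP=true

    # Data source URL using PROTOCOL=sdp instead of tcp (placeholder host/port/service)
    jdbc:oracle:thin:@(DESCRIPTION=
      (ADDRESS=(PROTOCOL=sdp)(HOST=exadb01-ibvip)(PORT=1521))
      (CONNECT_DATA=(SERVICE_NAME=dbservice)))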
Enabling Exalogic Enhancements - Exabus
Leveraging SDP for Exalogic-to-Exadata communications. Exabus communication between Exalogic and Exadata can provide up to 960 Gb/s of throughput, as opposed to traditional 1 Gb/s or more modern 10 Gb/s Ethernet. Enabling it is as simple as updating the data source:

    (DESCRIPTION=
      (ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)
        (ADDRESS=(PROTOCOL=tcp)(HOST=xd01client01-ibvip)(PORT=1521))
        (ADDRESS=(PROTOCOL=tcp)(HOST=xd01client02-ibvip)(PORT=1521)))
      (CONNECT_DATA=(SERVICE_NAME=dbservice)))

Then take the configuration one step further and enable SDP for additional performance gains:

    (DESCRIPTION=
      (ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)
        (ADDRESS=(PROTOCOL=sdp)(HOST=xd01client01-ibvip)(PORT=1521))
        (ADDRESS=(PROTOCOL=sdp)(HOST=xd01client02-ibvip)(PORT=1521)))
      (CONNECT_DATA=(SERVICE_NAME=dbservice)))
Coherence Optimizations for Exalogic – Re-cap
- Exabus latency optimizations: RDMA with kernel bypass; the MessageBus utilizes the JVM IB API
- Scalable networking: a MessageBus per service; simplified deployments
- Elastic Data optimizations: improved heuristics optimized for Exalogic SSDs
- Cluster rebalancing: increases availability
Coherence Elastic Data
- Greatly increases the size of the data set that can be held in a cluster, or reduces the number of host machines required.
- Manages memory more efficiently: transparently overflows from RAM to SSD.
- Negligible performance impact when overflowing from RAM to SSD.
- Ideal when large solid state disks (SSDs) are locally accessible.
A cache configuration sketch follows this list.
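A minimal cache configuration sketch, assuming Coherence's journal-based backing map schemes (the RAM journal transparently overflows to the flash journal on SSD); the scheme and service names are illustrative:

    <distributed-scheme>
      <scheme-name>elastic-distributed</scheme-name>
      <service-name>ElasticCacheService</service-name>
      <backing-map-scheme>
        <!-- Entries live in RAM and overflow transparently to the flash journal (SSD) -->
        <ram-journal-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>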
Coherence MessageBus Performance on Exabus
A pluggable, high-performance messaging implementation for Exabus:
- RDMA is used for all data transfers, which reduces latencies.
- The existing UDP protocol is still used for cluster management.
- Addresses a common challenge for typical Coherence clusters: Coherence performance is usually very sensitive to network latency.
- Typically 4-8 Coherence JVMs run per Exalogic compute node.
Measured path from sender ("client") to receiver ("server"): allocate, SEND, RDMA WRITE, RDMA READ, Java receipt. Write latency is ~5µs and end-to-end latency ~50µs, giving a cache.get() latency of ~100µs (vs. ~350µs on normal systems).
Coherence Benchmark Performance
Coherence on Exalogic vs. COTS (10GbE): up to 4x transactions/sec, using the Performance CPU frequency-scaling governor.
Test system: Exalogic Eighth Rack; 6 Coherence cache servers on each compute node; Coherence on Java HotSpot 1.7.
"Coherence Bench" benchmark: sends requests from Coherence TCMP clients to Coherence cache servers. It stresses the Coherence layer and the network protocol layer (Exabus).
Enabling Coherence MessageBus Configuration for Exalogic
Typically you will get the best performance with the default, auto-selected transport protocol for your platform: IMB (InfiniBand MessageBus) is enabled by default when running on Exalogic Linux. There are two ways to change the protocol if needed:
- Edit the cluster's operational override file, tangosol-coherence-override.xml (see the sketch after this list).
- Pass a Java command-line argument: -Dtangosol.coherence.transport.reliable=<protocol>
Available transport protocols:
- imb: InfiniBand MessageBus (default on Exalogic Linux)
- tmb: TCP MessageBus
- sdmb: SDP MessageBus (only available on Exalogic)
- sdmbs: SDP MessageBus with SSL (only available on Exalogic)
- datagram: legacy UDP
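A sketch of the override-file approach, assuming the standard <reliable-transport> element under <unicast-listener>; the value shown simply forces the Exalogic default explicitly:

    <?xml version="1.0"?>
    <coherence>
      <cluster-config>
        <unicast-listener>
          <!-- imb, tmb, sdmb, sdmbs or datagram; can still be overridden with
               -Dtangosol.coherence.transport.reliable -->
          <reliable-transport system-property="tangosol.coherence.transport.reliable">imb</reliable-transport>
        </unicast-listener>
      </cluster-config>
    </coherence>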
Tuxedo Optimizations for Exalogic – Re-cap
- Better performance for intra-domain communication: leverages Exabus, reduces buffer copies, eliminates the BRIDGE as an inter-machine bottleneck
- Efficient algorithm for lock management: dynamic spincount
- Sockets Direct Protocol (SDP): faster network communication at lower CPU cost
- Shared APPDIR across all nodes: ease of application maintenance
- Zero copy using shared memory queues
Tuxedo 12.1.3 on Exalogic Elastic Cloud
Current architecture, with a gateway (GW) between nodes and a single TCP-over-Ethernet connection:
1. Client sends the message to the local GW through IPC.
2. GW sends the message over the network to the remote GW.
3. The remote GW delivers the message to the server through local IPC.
4. Server retrieves the message from local IPC.
Architecture on Exalogic, eliminating the GW (RDMA over InfiniBand):
1. Client writes the message directly to the remote server's queue using RDMA.
2. Server retrieves the message from its local queue.
Faster delivery, higher throughput.
Tuxedo 12.1.3 on Exalogic Elastic Cloud
Improved availability and scalability with Exadata:
- Dynamic database configuration: FAN UP and DOWN events plus real-time load advisory events move database connections to an appropriate instance.
- Simplified configuration.
- High availability for Tuxedo applications: transactions are routed to an available RAC instance automatically.
- Real-time load balancing across RAC instances.
Tuxedo 12.1.3 on Exalogic Elastic Cloud
High-performance domains:
- Use of Exabus/RDMA for direct communication, bypassing the TCP/IP stack
- Reduced buffer copies
- Simplified configuration
Result: up to 4x improvement.
Enabling Tuxedo Configuration for Exalogic
As long as the EECS option is specified in OPTIONS of the UBBCONFIG *RESOURCES section (a minimal sketch follows this list), the following features are enabled:
- Direct cross-node communication leveraging RDMA
- Direct cross-domain communication leveraging RDMA
- Self-tuning lock mechanism
- SDP support
- Use of shared memory for inter-process communication
- Read-only optimization for XA
- XA affinity
- Common XID
- Single Group Multiple Branches (SGMB)
- FAN integration
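A minimal UBBCONFIG *RESOURCES sketch with the option in place; the IPCKEY, MASTER and other values are illustrative placeholders:

    *RESOURCES
    IPCKEY     123456
    MASTER     site1
    MODEL      MP
    # EECS in OPTIONS turns on the Exalogic features listed above
    OPTIONS    LAN,EECS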
Oracle Traffic Director – Overview
Built-in application delivery controller for load balancing over InfiniBand, sitting between the datacenter service network and the applications.
High performance:
- Built-in HTTP and TCP load balancing; native Exabus integration
- SSL and TLS offloading; content caching and HTTP compression
- Throughput: 10k HTTP and 5k HTTPS req/sec on 2 vCPUs with 2 ms response time
High availability:
- Built-in HA on engineered systems
- Back-end offline health checks; assign back-end servers as backups
- Dynamic reconfiguration
Quality of service:
- Request and content-based routing; request rate acceleration
- Request-rate and connection limiting; quality-of-service tuning
Security:
- HTTP reverse proxy; support for SSL 3.0 and TLS 1.0
- Web application firewall (WAF)
Enabling OTD to WebLogic Enhancements
In the WebLogic tier:
- Create an HTTP channel for each server, with a specific new port (e.g. 7103).
- In Advanced Properties, select SDP Enabled.
- Restart each server.
In Oracle Traffic Director, for each origin server (WLS instance) in the origin-server pool:
- Set the Address Family field to "inet-sdp".
- Change the listen port to the new value (e.g. 7103).
A WLST sketch of the WebLogic-side steps follows this list.
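A sketch of the WebLogic side in WLST; it assumes the NetworkAccessPoint attribute SDPEnabled that backs the console's "SDP Enabled" checkbox, and the server name, port and connection details are placeholders:

    connect('weblogic', '<password>', 't3://adminhost:7001')  # placeholders
    edit(); startEdit()
    cd('/Servers/server1')                    # repeat for each managed server
    nap = cmo.createNetworkAccessPoint('OTDChannel')
    nap.setProtocol('http')
    nap.setListenPort(7103)                   # the port the OTD origin server will target
    nap.setSDPEnabled(true)                   # assumption: attribute behind "SDP Enabled"
    save(); activate()
    # Restart each server afterwards, as noted above.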
EECS Optimization Documentation
- Exalogic Enterprise Deployment Guide (optimization)
- Administering Oracle WebLogic Server on Exalogic (12.1.3)
- Tuning Performance of Oracle WebLogic Server 12c (12.1.3)
- Optimizing Persistence Applications for Oracle Exalogic Using TopLink
- Oracle Tuxedo / Oracle Exalogic Users Guide
Customer Case Study: Agilent Technologies Sandeep Rohilla, Solutions Architect
Agilent Technologies – Fact Sheet
Agilent is a leader in life sciences, diagnostics and applied markets. The company provides laboratories worldwide with instruments, services, consumables, applications and expertise, enabling customers to gain the insights they seek.
Key markets: chemical and energy; pharma and biotech; environmental and forensics; food; diagnostics and clinical research.
US $3.9 billion revenue; 11,000 employees.
Project Methodology
Go-live date: 22-Feb-2014. Implementation time: 6.5 months.
Before:
- eCommerce platform: MS Commerce Server 2007
- Search: MS SharePoint 2007
- Database: MS SQL 2005
- Application server: .NET 3.0
After:
- eCommerce platform: Oracle ATG Web Commerce 10.1
- Search: Oracle Endeca Search
- Database: Oracle 11g
- Application server: Oracle WebLogic Server
Phases: Analyze (as-is), Collaborate, Transform, Launch.
Site Performance Results
Product Versions & Tools
Products and technologies:
- Oracle ATG Web Commerce; ATG Web Commerce Merchandising; ATG Content Administration
- Oracle Endeca MDEX Engine 6.3.0; Endeca Platform Services 6.1.3; Endeca Frameworks with EM 3.1.0; Endeca CAS 3.0.2; Endeca Experience Manager 3.1.0
- Oracle Exalogic hardware; Oracle WebLogic Server; Oracle Traffic Director and Oracle HTTP Server
- Oracle 11g (database); Oracle EMOC for monitoring
- HTML5/CSS3/Bootstrap/jQuery/JSON
Tools:
- Adobe Dreamweaver, Photoshop and Flex SDK
- Apache, Eclipse, SQL Developer
- Hudson, SVN
- OWASP Zed Attack Proxy (ZAP)
- PMD, Checkstyle, SONAR
- Selenium WebDriver, SoapUI, JMeter
- Jupiter (Eclipse plug-in for code review)
- HP Quality Center, JIRA and MPP
- Fiddler and Wireshark; OpenSSL and SSL Tap
- JRockit with Mission Control; VisualVM and Dynatrace
- TDA, GC Viewer and Samurai
Questions
Join the Exalogic Community
Follow us on Twitter, Facebook, LinkedIn, YouTube and the Exalogic Blog. Visit us at: oracle.com/exalogic
Join Us for Other Exalogic Sessions
- Oracle Exalogic Roadmap: Hardware, Software, Platform News. Monday, 2:45-3:30 p.m., Moscone South Room 270
- VMware vs Oracle Exalogic: A Fair and Balanced Comparison. Monday, 5:15-6:00 p.m., Moscone South Room 304
- How to Deliver Oracle Fusion Middleware-as-a-Service to Your Organization. Monday, 4:00-4:45 p.m., Moscone South Room 262
- Zero Downtime for Oracle E-Business Suite on Oracle Exalogic. Tuesday, 3:45-4:30 p.m.
- Customer Innovations on Oracle Engineered Systems. Wednesday, 11:30 a.m.-12:15 p.m.
- Building Application Private Clouds with Engineered Systems. Wednesday, 11:30 a.m.-12:15 p.m., Moscone West
- Building a Scalable Private Cloud with Oracle Exalogic, Nimbula, and Openstack. Wednesday, 4:45-5:30 p.m.
- Exalogic Innovation Awards. Wednesday, 5:00-5:45 p.m., Yerba Buena Lam Research Center
- PeopleSoft High Performance & MAA Guidelines on Oracle Exalogic & Oracle Exadata. Thursday, 10:45-11:30 a.m., Marriott Marquis Salon 14/15
- Increasing Your Application Availability with Oracle Traffic Director. Thursday, 12:00-12:45 p.m.
- Thriving with Oracle Exalogic: The Definitive Owner/Operator Lifecycle Handbook. Moscone South Room 304
- Oracle Exalogic, Oracle Solaris & Oracle SuperCluster: More Versatile Than Ever. Thursday, 1:15-2:00 p.m.
- Unleashing Extreme Performance and Scalability on Oracle Exalogic Elastic Cloud. Thursday, 2:30-3:15 p.m., Moscone South Room 200
- Extreme SOA: Secure, Fast, Scalable and Reliable: Delivered on Oracle Exalogic