The Power of Platform Solutions to Transform Higher Education: 100,000 Students on Banner, Exceptional Performance
Presented by: Scott Howe and Seth Sites, Liberty University
Oct 1, 2014
Background: Intense Growth
Liberty IT has been challenged to support the growth of both residential and online programs.
- April 2006 – Performance issues and the need for high availability drive Blackboard from Microsoft SQL Server to a 4-node Oracle 10g RAC split between campuses.
- July 2006 – The first Banner module goes live, replacing the AS/400 for ERP. The initial system ran AIX on IBM p570 infrastructure.
- Dec 2007 – Performance and high-availability requirements drive a Banner migration onto dedicated 4-node Red Hat RACs.
- 2009 – Both clusters are upgraded from Oracle 10g to Oracle 11g, with a hardware refresh to increase system capacity and minimize the downtime window for the upgrade. Blackboard inherits the older Banner hardware.
- 2012 – Migration from the Dell 4-node RACs to new hardware (UCS blades), consolidated into a single datacenter to eliminate interconnect latency.
- 2008-2012 – RAC licensing and hardware constraints force multiple tier 0 services (dotCMS, DegreeWorks, BbTS) onto nodes without any form of redundancy.
- 2013+ – Database sprawl, anticipated growth, environment complexity, and reliance on shared storage drive an evaluation of Exadata.
Problem
- 40+ database servers (production, stage, test, and development): Blackboard, BbTS, DegreeWorks, Banner, in-house applications, etc.
- Interconnect proximity and speed
- Nonproduction instances and snapshotting
- Disk I/O inconsistency
- Administrative hurdles due to deviations in:
  - Database software versions
  - Patch levels
  - Hardware
  - Firmware
  - OS versions/patches
The Next Step

IT Needs
- Capacity on demand
- Reduced turnaround for projects needing new databases
- More accurate performance testing for deployments
- Reduced patching effort
- Support structure (one throat to choke)

Business Needs
- Improved end-user/student performance
- Improved Business Intelligence Office reporting capabilities
- Increased redundancy/high availability
Exadata: Overall Goals
- High availability of production systems (redundancy)
- Performance
- Manageability
  - Ease of use
  - Centralized administration
  - Simplicity of the environment
POC: Summer 2013
To persuade a skeptic: development instances on a 1/8 rack.
Proof of Concept
Points of evaluation:
- I/O throughput
- Hybrid Columnar Compression
- Business case tests
- Backups/cloning
POC Results #1
- Evaluation of I/O-intensive queries
- Differences between the X2 and X3 make Exadata great for OLTP
POC Results #2
Evaluation of Hybrid Columnar Compression claims:
- What kind of compression could we realistically expect to see?
- How would it impact performance?
- Can it be used in an OLTP setting?
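As a hedged illustration (the table names here are hypothetical, not from the presentation), HCC is enabled per table or partition with a compression clause; the QUERY levels target warehouse scans, while the ARCHIVE levels target cold data:

```sql
-- Hypothetical sketch: create an HCC-compressed copy of a reporting table.
-- Levels range from QUERY LOW/HIGH (scan-oriented) to ARCHIVE LOW/HIGH
-- (maximum compression for rarely touched data).
CREATE TABLE enrollment_history_hcc
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM enrollment_history;
```

Because HCC decompresses at the compression-unit level and conventional row-by-row DML degrades HCC-compressed blocks to a lower compression level, it generally suits read-mostly data rather than hot OLTP tables, which is exactly the question the POC set out to answer.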
POC Results #3
Several business-critical reports were identified as test cases. Without any Exadata-specific optimization, on an instance imported from a dump, how would they perform?
- Query runtime was reduced by 30% on the Exadata.
- Expecting further gains from:
  - A 1/2 rack vs. the 1/8 POC rack
  - Additional tweaking, such as degrees of parallelism
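The "degrees of parallelism" lever mentioned above can be sketched as follows (the table name is hypothetical):

```sql
-- Hypothetical sketch: request a statement-level degree of parallelism of 8
-- so the scan is spread across more Exadata storage cells (11gR2+ hint form).
SELECT /*+ PARALLEL(8) */ term_code, COUNT(*)
FROM   registration_history
GROUP  BY term_code;

-- Alternatively, set a default DOP on the table itself:
ALTER TABLE registration_history PARALLEL 8;
```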
POC Results #4
Development practices expect production-identical sandboxes, so it is critical to be able to provide a clone from production during the work day as quickly as possible. How fast can we perform a full backup of our ERP instance (Banner)?
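A backup/clone timing test of this kind is typically driven through RMAN; a minimal sketch, assuming a hypothetical clone instance name (banclone) and a configured auxiliary connection:

```
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;  # time a full backup

RMAN> DUPLICATE TARGET DATABASE TO banclone
      FROM ACTIVE DATABASE;                     # clone without staging a backup
```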
Full Implementation: Fall 2013
90% of all instances migrated to Exadata in under 2 months.
Cross Campus Redundancy
Performance: ERP
- Nominal load on production Banner was around 30-45 active waiting sessions
- Traffic comprised application, commit, user I/O, and CPU waits
- Post-Exadata migration: ~6 active waiting sessions (95% of which is just CPU)
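Average active session figures like these can be sampled from ASH (a sketch, assuming the Diagnostics Pack is licensed; sessions on CPU have a NULL wait class):

```sql
-- Average active sessions per wait class over the last hour,
-- based on ASH's ~1-second sampling (3600 samples/hour).
SELECT NVL(wait_class, 'CPU') AS wait_class,
       ROUND(COUNT(*) / 3600, 1) AS avg_active_sessions
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 1/24
GROUP  BY wait_class
ORDER  BY avg_active_sessions DESC;
```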
Performance: LMS
- In the Blackboard implementation we see between 1 and 2 million calls per minute
- Average response times under 10 milliseconds
- Average active waiting sessions went from 20-30 to 2!
85% reduction in overall active and waiting sessions
Wait Classes
Wait Classes Continued
Large Job Runtimes
Data Warehouse Results

Pre-Exadata:
- 97% full 4 TB direct-attached disk array
- 256 GB of memory
- 64 cores

Post migration:
- Performance: 81% of the 123 ETL jobs running under DBMS_SCHEDULER run faster with no tuning whatsoever. There will continue to be tuning opportunities associated with the migration, but overall, queries simply run faster.
- Disk space: consumes 1/4 of the disk space that was required on the non-Exadata platform, due to leveraging HCC for tables in the archive partition.
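The archive-partition approach and the ETL job timings above can be sketched as follows (table, partition, and date filters are hypothetical, not from the presentation):

```sql
-- Move a cold partition into Archive-level HCC, leaving hot partitions alone.
ALTER TABLE etl_fact_enrollment
  MOVE PARTITION p_2010 COMPRESS FOR ARCHIVE HIGH;

-- Compare DBMS_SCHEDULER job runtimes after the migration.
SELECT job_name, actual_start_date, run_duration
FROM   dba_scheduler_job_run_details
WHERE  actual_start_date > SYSDATE - 7
ORDER  BY run_duration DESC;
```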
Post-Implementation Results
- Conversion to RAC
- Creation of Data Guard instances
- Claim validation ("bored" databases)
- Already met two unforeseen, crucial business needs for database resources (CAS and Liferay)
- Reduced effort required for the quarterly patching cycle
- Performance
  - Job runtimes
  - Storage cell offloading
  - Streamed data warehouse
- DBAs taking fewer after-hours calls
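Creating the Data Guard instances mentioned above is typically scripted through the broker; a minimal DGMGRL sketch with hypothetical database names (prod as primary, prodsb as physical standby):

```
DGMGRL> CREATE CONFIGURATION 'lu_dg' AS
        PRIMARY DATABASE IS 'prod' CONNECT IDENTIFIER IS prod;
DGMGRL> ADD DATABASE 'prodsb' AS
        CONNECT IDENTIFIER IS prodsb MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
```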
Q&A
Thank You!
Kevin Roebuck (kevin.roebuck@oracle.com)
Scott Howe (jshowe@liberty.edu)
Seth Sites (ssites@liberty.edu)
© 2014 Ellucian. All rights reserved.