Perforce Benchmark with PAM over NFS, FCP & iSCSI
Bikash R. Choudhury

Perforce Testing with the Performance Acceleration Module

Goal
– Compare Perforce benchmark results with and without PAM
– Compare Perforce benchmark results over NFS vs. iSCSI vs. FCP

Measured metadata performance on NetApp storage with and without PAM
– The entire feature set was not considered
– Tests were conducted in a controlled environment

Executive Summary

Results
– 40% to 90% READ performance improvement with PAM
– For READ performance, NFS is slightly faster than iSCSI and FCP without PAM
  - The application is single threaded; iSCSI/FCP on ext3 with a 4 KB block size is less effective than NFS with 64 KB transfers
  - Once the host-side cache is flushed, NFS (file access) is faster than iSCSI (block access) when served out of the WAFL cache
– For WRITE performance, iSCSI is 7x faster than NFS
– A typical Perforce workload is 70% READ, 30% WRITE

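To make the block-size contrast concrete, here is a minimal sketch of how the two access paths might be configured; the device name, filer name, and mount point are hypothetical placeholders:

    # hypothetical: ext3 on an iSCSI/FCP LUN is limited to 4 KB filesystem blocks
    mkfs.ext3 -b 4096 /dev/sdb
    # hypothetical: the NFS mount moves 64 KB per read/write request
    mount -t nfs -o rsize=65536,wsize=65536 filer:/vol/p4 /mnt/p4nfs

With a single-threaded reader, each NFS request can fetch sixteen times more data than a 4 KB block read, which is consistent with NFS leading on READs here.
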
Deltas Benchmark – READ workload

[Chart: response time in seconds for the SYNC, INTEGRATE, and DIR tests, comparing a FAS3070 with PAM against a FAS3070 without PAM. Improvements with PAM – NFS: 88%, 75%; iSCSI: 91%, 76%, 43%; FCP: 96%, 85%.]

Response-time improvements are averages across 2, 6, and 3 runs for the DIR, INTEGRATE, and SYNC tests respectively.

Branchsubmit Benchmark – WRITE workload

The test submits a changelist (p4 submit -c <changelist>) that holds approximately a dozen locks on the database tables for update purposes.

– FCP
  11 seconds – elapsed time (time taken for each command to complete)
  files/sec – commit rate (rate at which files are written to disk)
– iSCSI
  14 seconds – elapsed time
  14,000 files/sec – commit rate
– NFS
  50 seconds – elapsed time
  2,000 files/sec – commit rate

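As a rough illustration (not Perforce's actual benchmark script), the timed step is a branch-and-submit cycle along these lines; the depot paths and changelist description are hypothetical:

    # hypothetical branch-and-submit cycle like the one the benchmark times
    p4 integrate //depot/main/... //depot/branch/...   # open files for branching
    time p4 submit -d "branchsubmit run"               # the submit holds ~a dozen DB table locks
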
Perforce Architecture

Metadata DB – the files that compose the Perforce database
Journal files – a record of all transactions performed by the server since the last checkpoint was created
Versioned files (depot) – store the file revisions
Log – contains the error output files

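Each of these components maps onto a p4d startup flag, which is what allows them to live on separate volumes or protocols. A minimal sketch with hypothetical paths and port:

    # hypothetical layout: metadata DB under -r, journal under -J, log under -L
    # -r P4ROOT (db.* files and depot), -J journal file, -L error log, -p listen port
    p4d -r /p4/root -J /p4/journal/journal -L /p4/logs/p4d.log -p 1666
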
Test Descriptions

Deltas Benchmark (READ test)
– Used real production data from NetApp Engineering IT to generate the p4 database and journal files
– The DIR, INTEGRATE, and SYNC tests generate the READ workload (sketched as commands below)
  - INTEGRATE/DIR scan, parse, and read the different revisions of the code from the p4 database, with and without locks
  - SYNC downloads selected file(s) from the depot to the client workspace
– Odd- and even-numbered integrates use related data
  - The related data is already in the NetApp cache (memory + PAM) for the even-numbered integrates
  - This demonstrates the large performance improvement when data is already present in the NetApp cache

Branchsubmit Benchmark (WRITE test)
– Used Perforce's reference dataset
– Heavy WRITEs with limited READs; single threaded
  - Sets exclusive locks on files for writes
  - Faster writes release the locks sooner for user read requests

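As a rough sketch, the p4 commands behind the three READ tests look like the following; the depot paths are hypothetical placeholders, not the benchmark's actual scripts:

    # hypothetical commands approximating each READ test
    p4 dirs //depot/main/*                               # DIR: scan directory metadata
    p4 integrate -n //depot/main/... //depot/branch/...  # INTEGRATE: preview reads revision history
    p4 sync //depot/main/...                             # SYNC: download files to the client workspace
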
PAM Settings

There are three modes of operation for PAM:
– Default
– Metadata
– Low priority

PAM is mainly used for RANDOM read workloads:
– The actual application data is seldom reused in a timely manner, so caching the metadata is what helps
– The random nature of the p4 workload tends to use more metadata

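On Data ONTAP 7.3 these modes are selected with the flexscale options. A minimal sketch of enabling metadata mode, the mode that suits this workload, assuming the standard flexscale option names:

    # enable PAM and restrict caching to metadata blocks
    options flexscale.enable on
    options flexscale.normal_data_blocks off   # off = metadata mode
    options flexscale.lopri_blocks off         # keep low-priority blocks out of the cache
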
NetApp Storage Specifications

Storage controller: FAS3070 – active/active cluster
OS: Data ONTAP release 7.3
Cache: 8 GB
Drive shelves: one shelf with fourteen 300 GB 15k RPM drives on each head
Drive access protocols: iSCSI, NFS (both over a 1-gigabit interface), FCP
PAM: 1 PCI slot (16 GB)

Perforce Host Specifications

OS: Red Hat Enterprise Linux 5 update 2
Kernel: el5xen #1 SMP
NFS mount options: rw,nfsvers=3,bg,hard,rsize=65536,wsize=65536,acregmin=3600,acregmax=3600,acdirmin=7200,acdirmax=7200,proto=tcp,nointr,nolock,timeo=600,retrans=5
iSCSI initiator: software initiator
Memory: 32 GB
Processor(s): 8-way Intel Xeon quad-core, 2.00 GHz

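Assembled into a mount command, the options above would look like this; the filer name, export path, and mount point are hypothetical:

    # hypothetical mount using the option string listed above
    mount -t nfs -o rw,nfsvers=3,bg,hard,rsize=65536,wsize=65536,acregmin=3600,acregmax=3600,acdirmin=7200,acdirmax=7200,proto=tcp,nointr,nolock,timeo=600,retrans=5 filer:/vol/p4 /p4/nfs
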
Hardware Connections

– Two filesystems mounted over NFSv3 from one FAS3070 head with one PAM card
– Two iSCSI LUNs mounted from the other FAS3070 head with one PAM card
– Two FCP LUNs mounted from that same FAS3070 head with one PAM card
– Both the NFS and iSCSI connections ran over a single dedicated 1 Gb Ethernet link
– The FCP target and initiator were connected back-to-back
– The NFS filesystems and the iSCSI and FCP LUNs were all mounted on a single host running Perforce on RHEL 5 update 2

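For reference, a minimal sketch of discovering and logging into the iSCSI LUNs with the RHEL 5 software initiator (open-iscsi); the portal address is a placeholder:

    # discover targets on the filer's iSCSI portal, then log in
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node --login
    # the LUNs then appear as /dev/sd* devices, ready for mkfs.ext3 and mount
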
Key Takeaways – PAM

PAM is needed where you want to reach high performance:
– With workloads on few spindles
– With small block sizes
– With random read workloads
– With a lot of metadata READ ops

Thank You