Perforce Benchmark with PAM over NFS, FCP & iSCSI
Bikash R. Choudhury

Perforce Testing with Performance Acceleration Module

Goal:
- Compare Perforce benchmark results with and without PAM
- Compare Perforce benchmark results over NFS vs. iSCSI vs. FCP

Measured metadata performance on NetApp storage with and without PAM:
- The entire feature set was not considered
- Tests were conducted in a controlled environment

Executive Summary

Results:
- 40% to 90% READ performance improvement with PAM
- Without PAM, READ performance over NFS is slightly faster than over iSCSI and FCP:
  - The application is single threaded; iSCSI/FCP on ext3 with a 4 KB block size is less effective than NFS with 64 KB transfers
  - Once the host-side cache is flushed, NFS (file access) served out of the WAFL cache is faster than iSCSI (block access)
- For WRITE performance, iSCSI is 7x faster than NFS
- A typical Perforce workload is 70% READ, 30% WRITE
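To see where the block-size difference comes from (a minimal sketch; the device name is hypothetical), ext3 caps its filesystem block size at 4 KB, while the NFS mounts in this test used 64 KB transfer sizes:

    # Format the LUN with ext3's maximum 4 KB block size
    mkfs.ext3 -b 4096 /dev/sdb1
    # Confirm the block size of an existing ext3 filesystem
    tune2fs -l /dev/sdb1 | grep 'Block size'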

Deltas Benchmark – READ Workload

[Chart: response times in seconds for the DIR, INTEGRATE, and SYNC tests on a FAS3070 with PAM vs. without PAM. Recoverable improvement figures: NFS 88% and 75% (third value unreadable); iSCSI 91%, 76%, and 43%; FCP 96% and 85% (third value unreadable).]

Response time improvements are averages across 2, 6, and 3 tests for the DIR, INTEGRATE, and SYNC tests respectively.

Branchsubmit Benchmark – WRITE Workload

The test submits a changelist (p4 submit -c ) that holds approximately a dozen locks on the database tables for the duration of the update.

- FCP: 11 seconds elapsed time (time taken for each command to complete); commit rate (rate at which files are written to disk) of [value missing] files/sec
- iSCSI: 14 seconds elapsed time; 14,000 files/sec commit rate
- NFS: 50 seconds elapsed time; 2,000 files/sec commit rate
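A minimal sketch of the operation the benchmark exercises (the branch spec and changelist number are hypothetical; the deck elides the changelist number):

    # Integrate files into a pending changelist, then submit it;
    # the submit holds the database table locks until it completes
    p4 integrate -b projX-branch -c 12345
    p4 submit -c 12345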

Perforce Architecture

[Architecture diagram]

Perforce Architecture

- Metadata DB: the files that compose the Perforce database
- Journal files: a record of all transactions performed by the server since the last checkpoint was created
- Versioned files (depot): store the file revisions
- Log: contains the error output files
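As a sketch of how these components can be placed on separate volumes (the paths and port number are hypothetical; the deck does not show the server configuration):

    # Metadata DB lives under P4ROOT; the journal and log can sit elsewhere
    export P4ROOT=/p4/db                   # db.* metadata files
    export P4JOURNAL=/p4/journal/journal   # transaction journal
    export P4LOG=/p4/logs/p4d.log          # error output log
    p4d -r $P4ROOT -p 1666 -d              # start the server as a daemon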

Test Descriptions

Deltas Benchmark (READ test):
- Used real production data from NetApp Engineering IT to generate the p4 database and journal files
- The DIR, INTEGRATE, and SYNC tests generate the READ workload (illustrative p4 commands follow this slide):
  - INTEGRATE and DIR scan, parse, and read the different revisions of the code from the p4 database, with and without locks
  - SYNC downloads selected files from the depot to the client workspace
- Odd- and even-numbered integrates use related data:
  - For even-numbered integrates, the related data is already in the NetApp cache (memory + PAM)
  - This demonstrates the large performance improvement when data is already present in the NetApp cache

Branchsubmit Benchmark (WRITE test):
- Used Perforce's reference dataset
- Heavy WRITEs with limited READs; single threaded:
  - Sets exclusive locks on files for writes
  - Faster writes release the locks sooner for user read requests
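Illustrative p4 commands for the three READ tests (the depot path and branch spec are hypothetical; the deck does not list the exact invocations):

    p4 dirs //depot/main/*            # DIR: walks the directory structure in the db
    p4 integrate -n -b projX-branch   # INTEGRATE: preview; reads revision history without writing
    p4 sync //depot/main/...          # SYNC: downloads files to the client workspace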

PAM Settings

- There are three modes of operation for PAM:
  - Default
  - Metadata
  - Low Priority
- PAM is mainly used for random READ workloads
- The actual application data is seldom reused in a timely manner, so caching the metadata is what helps
- The random nature of the p4 workload tends to use more metadata
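The deck does not show the controller settings used; on Data ONTAP 7.3 the PAM mode is normally selected with the flexscale options, so a metadata-mode configuration would look roughly like this (an assumption, not the verified test configuration):

    options flexscale.enable on
    # Metadata mode: cache metadata only, not normal or low-priority data blocks
    options flexscale.normal_data_blocks off
    options flexscale.lopri_blocks off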

NetApp Storage Specifications

- Storage controller: FAS3070, active-active cluster
- OS: Data ONTAP Release 7.3
- Cache: 8 GB
- Drive shelves: one shelf of fourteen 300 GB 15k RPM drives on each head
- Drive access protocols: iSCSI and NFS (both over a 1-gigabit interface), and FCP
- PAM: 1 PCI slot (16 GB)

Perforce Host Specifications

- OS: Red Hat Enterprise Linux 5 Update 2
- Kernel: el5xen #1 SMP
- NFS mount options: rw,nfsvers=3,bg,hard,rsize=65536,wsize=65536,acregmin=3600,acregmax=3600,acdirmin=7200,acdirmax=7200,proto=tcp,nointr,nolock,timeo=600,retrans=5
- iSCSI initiator: software initiator
- Memory: 32 GB
- Processors: 8-way Intel Xeon quad-core, 2.00 GHz
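A mount command corresponding to the options above (the filer name, export path, and mount point are hypothetical):

    mount -t nfs -o rw,nfsvers=3,bg,hard,rsize=65536,wsize=65536,\
    acregmin=3600,acregmax=3600,acdirmin=7200,acdirmax=7200,\
    proto=tcp,nointr,nolock,timeo=600,retrans=5 \
    filer01:/vol/p4db /p4/db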

Hardware Connections

- Two filesystems mounted over NFSv3 from one FAS3070 head with one PAM card
- Two iSCSI LUNs mounted from another FAS3070 head with one PAM card
- Two FCP LUNs mounted from the same FAS3070 head with one PAM card
- Both the NFS and iSCSI connections run over a single dedicated 1 Gb Ethernet link
- The FCP target and initiator were connected back-to-back
- The NFS filesystems and the iSCSI and FCP LUNs were all mounted on a single host running Perforce on RHEL 5 Update 2
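For the iSCSI LUNs, a typical open-iscsi sequence on RHEL 5 looks like this (the target portal and IQN are hypothetical; the deck does not show the initiator commands):

    # Discover targets on the filer, then log in to the reported target
    iscsiadm -m discovery -t sendtargets -p 192.168.10.5
    iscsiadm -m node -T iqn.1992-08.com.netapp:sn.12345 -p 192.168.10.5 --login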

Key Takeaways – PAM

PAM is needed where you want to reach high performance with:
- Few spindles serving the workload
- Small block sizes
- Random READ workloads
- A large number of metadata READ ops

Thank You