1 Xin Wang, Internet Real-Time Laboratory, Columbia University (joint work with Henning Schulzrinne, Dilip Kandlur, and Dinesh Verma) http://www.cs.columbia.edu/~xinwang

2 Outline
–Introduction to LDAP
–Motivation
–Background
–Experimental Setup
–Test Methodology
–Result Analysis
–Related Work
–Conclusion

3 What is LDAP?
Directory Service
–A simplified database, primarily for high-volume, efficient reads; no database mechanisms to support roll-back of transactions
LDAP: Lightweight Directory Access Protocol
–A distributed client-server model over TCP/IP
–Can access stand-alone directory servers or X.500 directories

4 Motivation
Wide use of LDAP
–Personnel databases for administration, tracking schedules, address translation for IP telephony, storage of network configuration, etc.
Performance of LDAP?
–Data is relatively static, so caching can improve performance
–Can LDAP be used in a dynamic environment with frequent searches?

5 Background: LDAP Structure
Tree structure: entry, attributes, values (an example entry is sketched below)
Operations: add, delete, modify, compare, and search
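For illustration, a tiny LDIF sketch of a single directory entry: the DN places the entry in the tree, and each following line is an attribute with a value. The DN, object class, and attribute names here are hypothetical placeholders, not the schema used in the experiments.

    # Hypothetical entry in LDIF form
    dn: cn=policy1, ou=policies, o=example
    objectclass: top
    cn: policy1
    description: example policy entry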

6 Background (cont’d.)
LDAP for SLS Administration
–A better-than-best-effort service (e.g., int-serv, diff-serv) requires a service level specification (SLS) between the network and the customer
–The SLS specifies the type of service, user traffic constraints, the quality expected, etc., and may be dynamically negotiated
–The LDAP directory contains: the SLS, policy rules, and network provisioning information

7 LDAP Structure for SLS Management
–Management tools populate and maintain the LDAP directory
–Decision entities download classification rules and service specifications, and poll the directory periodically
–Enforcement entities query rules from the decision entities and enforce them

8 LDAP Tree Structure in the Experiments (figure)

9 Experimental Setup
Hardware:
–Server: dual-processor Ultra-2, 200 MHz CPUs, 256 MB main memory; the server was bound to one of the CPUs
–Clients: Ultra 1, 170 MHz CPU, 128 MB main memory
–10 Mb/s Ethernet
LDAP server:
–OpenLDAP 1.2, Berkeley DB 2.4.14
–Stand-alone LDAP daemon (slapd): a front end handling communication with LDAP clients and a back end handling database operations
–LDBM back end: a high-performance disk-based database
–cachesize: size, in entries, of the in-memory entry cache; varied across the experiments (see the configuration sketch below)
–dbcachesize: size, in bytes, of the in-memory cache associated with each open index file; set to 10 MB
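A minimal slapd.conf sketch for the LDBM back end, showing where the two cache directives above are set; the include paths, suffix, database directory, and indexed attribute are hypothetical placeholders, not the actual experimental configuration.

    # Hypothetical OpenLDAP 1.2 slapd.conf fragment (LDBM back end)
    include        /usr/local/etc/openldap/slapd.at.conf
    include        /usr/local/etc/openldap/slapd.oc.conf

    database       ldbm
    suffix         "o=example"                # placeholder naming context
    directory      /usr/local/var/openldap-ldbm

    cachesize      10000                      # in-memory entry cache, in entries
    dbcachesize    10000000                   # per-index-file cache, in bytes (10 MB)

    # Index the attribute used in the search filter (hypothetical attribute name)
    index          interfaceaddress eq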

10 Experimental Setup (cont’d): LDAP Client (figure)

11 Test Methodology
Search is likely to dominate the server operations, so the tests mainly measure search performance for downloading policy rules
Search filter: an interface address, returning the corresponding policy object
Default parameters:
–Directory size: 10,000 entries
–Entry size: 488 bytes
Search operation steps (a client sketch follows):
–ldap_open, ldap_bind, ldap_search, ldap_unbind
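A minimal client sketch of the four-step search sequence, using the (now deprecated) LDAP C API calls named above; the host name, base DN, and the interfaceAddress attribute in the filter are hypothetical placeholders, and error handling is kept minimal.

    /* Sketch: ldap_open -> bind -> search -> unbind.
     * Host, base DN, and filter attribute are hypothetical placeholders. */
    #include <stdio.h>
    #include <ldap.h>

    int main(void)
    {
        LDAPMessage *res = NULL, *e;
        int rc;

        LDAP *ld = ldap_open("ldap.example.com", LDAP_PORT);      /* connect */
        if (ld == NULL) { perror("ldap_open"); return 1; }

        rc = ldap_simple_bind_s(ld, NULL, NULL);                   /* anonymous bind */
        if (rc != LDAP_SUCCESS) {
            fprintf(stderr, "bind: %s\n", ldap_err2string(rc));
            return 1;
        }

        rc = ldap_search_s(ld, "ou=policies,o=example",            /* base DN */
                           LDAP_SCOPE_SUBTREE,
                           "(interfaceAddress=192.168.1.1)",       /* filter */
                           NULL, 0, &res);
        if (rc == LDAP_SUCCESS) {
            for (e = ldap_first_entry(ld, res); e != NULL; e = ldap_next_entry(ld, e)) {
                char *dn = ldap_get_dn(ld, e);
                printf("matched: %s\n", dn);
                ldap_memfree(dn);
            }
        } else {
            fprintf(stderr, "search: %s\n", ldap_err2string(rc));
        }

        ldap_msgfree(res);
        ldap_unbind(ld);                                            /* close connection */
        return 0;
    }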

12 Search Sequences (figure)

13 Performance Measures and Objectives
Latencies (a client-side timing sketch follows):
–Connect: ldap_open + ldap_bind
–Processing: ldap_search + result transmission
–Response: ldap_open through ldap_unbind (approximately connect + processing)
Server throughput: requests served per second
Objectives: use latencies and throughput to evaluate
–Overall LDAP performance
–Effect of individual system components on performance
–System scalability and performance limits
–Performance under update load
–Measures to improve system performance
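One way to obtain the three client-side latencies is to timestamp the API calls; a sketch with gettimeofday, reusing the hypothetical host, base DN, and filter from the earlier example (an illustration only, not the deck’s actual instrumentation).

    /* Sketch: timing connect, processing, and response latency on the client. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <ldap.h>

    static double ms(struct timeval a, struct timeval b)
    {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_usec - a.tv_usec) / 1e3;
    }

    int main(void)
    {
        struct timeval t0, t1, t2;
        LDAPMessage *res = NULL;

        gettimeofday(&t0, NULL);
        LDAP *ld = ldap_open("ldap.example.com", LDAP_PORT);
        if (ld == NULL || ldap_simple_bind_s(ld, NULL, NULL) != LDAP_SUCCESS)
            return 1;
        gettimeofday(&t1, NULL);                       /* connect = t1 - t0 */

        ldap_search_s(ld, "ou=policies,o=example", LDAP_SCOPE_SUBTREE,
                      "(interfaceAddress=192.168.1.1)", NULL, 0, &res);
        ldap_msgfree(res);
        ldap_unbind(ld);
        gettimeofday(&t2, NULL);                       /* processing ~ t2 - t1 */

        printf("connect %.2f ms  processing %.2f ms  response %.2f ms\n",
               ms(t0, t1), ms(t1, t2), ms(t0, t2));
        return 0;
    }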

14 Overall Performance (figures: average connection, processing, and response time; average server throughput)

15 Components of LDAP Search Latency (figures: client and server breakdown)

16 Components of LDAP Connect Latency (figure)

17 Effect of the Nagle Algorithm (figures: average server throughput; average server connection, processing, and response time; a TCP_NODELAY sketch follows)
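The Nagle algorithm delays small TCP segments until earlier data is acknowledged, which adds latency to short request/response exchanges. It can be disabled per socket with the standard TCP_NODELAY option; a minimal sketch on an already-connected socket descriptor (sockfd is a placeholder, and the LDAP library’s own socket handling is not shown).

    /* Disable the Nagle algorithm on a connected TCP socket. */
    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_NODELAY */
    #include <sys/socket.h>

    int disable_nagle(int sockfd)
    {
        int one = 1;
        if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0) {
            perror("setsockopt(TCP_NODELAY)");
            return -1;
        }
        return 0;
    }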

18 Effect of Caching Entries (figures: average server throughput; average connection, processing, and response time with a 10,000-entry cache and without a cache)

19 Single vs. Dual Processor (figures: average server throughput; average server connection, processing, and response time)

20 Single vs. Dual Processor (cont’d) (figure: read and write throughput)

21 Scaling of Directory Size (figures: average server throughput; average connection and processing time) a) 10,000 entries in DB, 10,000 in cache; b) 50,000 in DB, 50,000 in cache; c) 100,000 in DB, 50,000 in cache

22 Scaling of Directory Entry Size (in-memory) (figures: average server throughput; average connection and processing time, 488 bytes vs. 4880 bytes)

23 Scaling of Directory Entry Size (out-of-memory) (figures: average server throughput; average server connection and processing time, 488 bytes vs. 4880 bytes)

24 Connection Reuse (figures: average server throughput; average server processing time at 0%, 25%, 50%, 75%, and 100% reuse; a connection-reuse sketch follows)
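A sketch of how a client can amortize the connect and bind cost by reusing one LDAP connection for many searches instead of re-opening per request; the host, base DN, and filter remain the hypothetical placeholders used earlier.

    /* Reuse a single open, bound LDAP handle for many searches. */
    #include <ldap.h>

    #define NUM_SEARCHES 100

    int main(void)
    {
        LDAP *ld = ldap_open("ldap.example.com", LDAP_PORT);
        if (ld == NULL || ldap_simple_bind_s(ld, NULL, NULL) != LDAP_SUCCESS)
            return 1;                          /* connect and bind only once */

        for (int i = 0; i < NUM_SEARCHES; i++) {
            LDAPMessage *res = NULL;
            if (ldap_search_s(ld, "ou=policies,o=example", LDAP_SCOPE_SUBTREE,
                              "(interfaceAddress=192.168.1.1)",
                              NULL, 0, &res) != LDAP_SUCCESS)
                break;
            ldap_msgfree(res);                 /* connection stays open */
        }

        ldap_unbind(ld);                       /* single unbind at the end */
        return 0;
    }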

25 Latency and Throughput for Search and Add (figures: average server throughput; average server connect, processing, and response time)

26 Related Work
Mindcraft
–Servers compared: Netscape Directory Server 3.0 (NSD3), Netscape Directory Server 1.0 (NSD1), Novell LDAP Services (NDS)
–10,000-entry personnel DB
–Pentium Pro 200 MHz, 512 MB RAM
–All experiments run in memory
–Throughput: NSD3 183 requests/second; NSD1 38.4 requests/second; NDS 0.8 requests/second
–The CPU was found to be the bottleneck

27 Conclusion
General results:
–Response latency is about 8 ms at loads up to 105 requests/second
–Maximum throughput is 140 requests/second
–Of the 5 ms processing latency, 36% comes from the back end and 64% from the front end
–Connect time dominates at high load and limits the throughput
Disabling the Nagle algorithm reduces latency by about 50 ms
Entry caching:
–For a 10,000-entry directory, caching all entries gives a 40% improvement in processing time and a 25% improvement in throughput

28 Conclusion (cont’d)
Scaling with directory size is determined by back-end processing
–In-memory operation, 10,000 -> 50,000 entries: processing time increases 60%, throughput drops 21%
–Out-of-memory operation, 50,000 -> 100,000 entries: processing time increases another 87%, throughput drops 23%
Scaling with entry size (488 -> 4880 bytes):
–In-memory: the increase is mainly in front-end processing, i.e., the time for ASN.1 encoding; processing time increases by 8 ms (88% due to ASN.1 encoding) and throughput drops 30%
–Out-of-memory: throughput drops 70%, mainly due to increased data transfer time

29 Conclusion (cont’d)
CPU:
–During in-memory operation, dual processors improve performance by 40%
Connection reuse:
–60% performance gain when the connection is left open

