1
Operating System Support for Multi-User, Remote, Graphical Interaction Alexander Ya-li Wong, Margo Seltzer Harvard University, Division of Engineering and Applied Sciences Kim, Byeong Gil Software & System Laboratory @ Kangwon Natl. Univ.
2
Contents Abstract Introduction Background of X Windows and TSE Approach (Processor, Memory, Network) User Behavior (Compulsory Load, Dynamic Load) Latency Conclusion
3
Abstract Processor and memory scheduling algorithms are not tuned for thin-client service. Under heavy CPU and memory load, user-perceived latency rises by up to 100 times. TSE’s network protocol outperforms X by up to six times. A bitmap cache is essential for handling the dynamic elements of modern user interfaces and can reduce network load by up to 2000%.
4
Introduction Modern computer system architectures allow the processor, memory, disk, and display subsystems to be spatially distributed across a network. Thin clients are attractive for cost and manageability: hence renewed interest in X Windows-like schemes, the introduction of thin-client service into major commercial operating systems, and accelerating adoption in consumer products.
5
Background

                         X Windows                     TSE
  Library                Xlib GUI (user-level)         Win32 GUI classes (pass
                                                       through the kernel)
  Multi-user             Yes                           Yes
  Protocol               X                             RDP
  Compression & caching  None (toolkit-specific,       RLE; memory & disk
                         usually none)                 bitmap caching
  Platforms              Windows, Unix, Macintosh      Windows; Unix (via
                                                       third-party add-ons)
6
Background (con’t) LBX (Low Bandwidth X) is a protocol extension to X, implemented as a proxy that takes normal X traffic and applies various compression techniques.
7
Approach What is the maximum number of concurrent users, and what impact do users experience at that maximum? Interactive latency, not throughput: latency is a key performance criterion ( “Using Latency to Evaluate Interactive System Performance” by Endo, Y., Wang, Z., Chen, J.B., Seltzer, M. – 1996) Multi-user: benchmarking on a multi-user system. Graphical: must be considered with respect to latency. Remote access: the efficiency of the network protocol.
8
The Key Role of Latency Latency characteristics: latency tolerances for continuous operations are lower than for discrete operations; humans are irritated by latencies of 100ms or greater ( “Providing A Low Latency User Experience In A High Latency Application” by Holden, L. – 1997) Latency is felt more severely when: it continues to increase for any operation; the number of operations that induce perceptible latency increases; perceptible latency continually changes.
9
Factors affecting latency Hardware resources: the processor, memory, disk, and network; resource scarcity at each level of the memory hierarchy. Operating system structure: bad scheduling decisions, inefficient context switches, poor management of resource contention. User behavior: drives the load that exposes hardware resource limitations.
10
Experimental Testbed Composition Server - 333MHz Intel Celeron system - 96MB SDRAM - 4GB IDE hard disk - Bay Networks NetGear FA-310 Ethernet adapter Client - Intel Pentium II-400 - 128MB SDRAM - 11GB IDE hard disk - 3Com Ethernet adapter Network listening host - Intel Pentium-233 - 96MB EDO RAM - 2GB IDE hard disk - 3Com Ethernet adapter
11
Processor - Behavior From Behavior to Load Multi-user support: incoming session connections, additional per-user kernel state, ownership information. Remote-access support: interface operations pass through the network subsystem. These compulsory loads are inherent in the operating system.
12
Processor - Load TSE shows greater overall idle-state CPU activity: it listens for and handles incoming client connections and manages session state (the NT Virtual Memory, Object, and Process managers).
13
Processor - Latency
14
Dynamic Latency Methodology Sink C program - never voluntarily yields the processor - each instance should increase the scheduler queue length by one Testing program - TSE : Notepad - X Windows : vim Action - engage character repeat on the client machine, with the repeat rate set at 20Hz Measurement - using tcpdump
15
Dynamic Latency - Results With no load on the server, a message is sent to the client every 50ms (matching the 20Hz repeat rate). The server is then loaded.
16
Memory From Behavior to Load Compulsory Load Dynamic memory usage of the kernel - the system idle with no user sessions - 17MB for Linux and 19MB for TSE Memory usage of each user session - a minimal login with no additional user activity
17
Memory – Compulsory Load (con’t)
18
Memory - Latency From Load to Latency Opened a simple text-editing application remotely. The TSE average is about 40 times the perceptibility threshold; the Linux average is about 11 times the threshold.
19
Network How user behavior generates network load: comparing the abilities of RDP, X, and LBX; the growing usage of animation. How network load translates to user-perceived latency: the importance of network protocol efficiency. Terms “channel” – a stream of network messages between the client and server “input channel” – the stream from the client to the server
20
Network - Behavior From Behavior to Load Network load depends on the design and implementation of the user interface; the increasing richness and sophistication of graphical interfaces is making them increasingly network-intensive.
21
Network - Load Compulsory Load session negotiation and initialization: all network traffic exchanged during session setup, before any user activity Session setup costs - TSE : 45,328 bytes - Linux/X : 16,312 bytes Dynamic Load RDP, X, LBX Testing Environment - Corel WordPerfect - Gimp - Netscape Navigator prototap – protocol-tracing software based on tcpdump
22
Network – Load (con’t) RDP is the most efficient protocol, generating less than 25% of LBX’s traffic and less than 15% of X’s. LBX messages are compressed; the average message size is just 209 bytes.
23
Network – Load (con’t) Virtual-IP (VIP): omitting the IP header can reduce overhead. LBX has the smallest average message size, yet is still less than half as efficient as RDP.
24
Network – Animations: Bitmap Caching RDP outperforms LBX and X; X and LBX do not support bitmap caching. The TSE client reserves 1.5MB of memory for a bitmap cache, managed with an LRU eviction policy.
25
Network – Cache Effectiveness and CPU Load Bitmap caching is critical not only for reducing network load, but also for reducing processor load at the server.
26
Network – Cache: Looping Animations For values 25 ~ 65: 0.01 Mbps. For all values above 65: 0.96 Mbps. LRU is the wrong scheme for handling looping animations.
27
Network - Latency Demonstration Two simple C programs - establish a TCP connection and send and receive random data Ran ping for 60 sec. and took the average and variance of the RTT. The default ping payload is 64 bytes ( keystroke size)
28
Network – Latency (con’t)
29
Conclusion Latency is the paramount performance criterion. The study highlights important issues relevant to thin-client performance: resource scheduling is not well optimized, and under resource saturation latencies rise well above human-perceptible levels. A detailed comparison of the RDP, X, and LBX protocols shows RDP is more efficient in terms of network load (notably for animated UI elements).