1 Improving application layer latency for reliable thin-stream. By: Joel Fichter & Andrew Sitosky. Source: http://portal.acm.org/citation.cfm?doid=1517494.1517513

2 What is the problem? The responsiveness of an online video game depends heavily on the network connection between the client and the server (which we will refer to as a stream). Slower connections can make playing online games difficult and create what is commonly known as “LAG”. As an example later in this presentation shows, delays in sending and receiving packets can cause other players to be perceived at positions where they no longer are.

3 The Basics Many online games transport packets over what is known as a “thin-stream” connection. To qualify as a “thin” stream, a connection must meet at least one of two criteria: – 1. Packets are sent too infrequently (too few packets are in flight) to trigger fast retransmission. – 2. The packets are significantly smaller than the maximum segment size (MSS).
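The two criteria can be written as a simple check. The sketch below is a minimal Python illustration, not the authors' code; the limit of four packets in flight is taken from the later slides, and the half-MSS cut-off for “significantly smaller” is our own assumption.

    # Illustrative check for the "thin stream" criteria above (not the paper's code).
    # With the standard duplicate-ACK threshold of 3, fewer than 4 packets in flight
    # cannot generate enough duplicate ACKs for a fast retransmit.
    DUPACK_THRESHOLD = 3

    def is_thin_stream(packets_in_flight, avg_payload_bytes, mss_bytes=1460):
        too_few_in_flight = packets_in_flight < DUPACK_THRESHOLD + 1   # criterion 1
        small_packets = avg_payload_bytes < mss_bytes // 2             # criterion 2 (cut-off assumed)
        return too_few_in_flight or small_packets

    # Example: a game sending ~100-byte updates with two packets outstanding
    print(is_thin_stream(packets_in_flight=2, avg_payload_bytes=100))  # True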

4 Attempts made TCP's existing retransmission scheme relies on three duplicate ACKs before it initiates a fast retransmission. In thin-stream traffic, not enough packets are in flight to trigger this fast retransmit; retransmissions are instead caused by retransmission timeouts, leading to still greater delay between packet arrivals. This approach is unsuited for online games due to the time-sensitive nature of the data.
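To see why timeouts dominate, note that each packet arriving after a lost one produces exactly one duplicate ACK. A minimal sketch of this counting argument (our own example, with hypothetical names):

    # With only a couple of packets in flight, a single loss can never produce
    # the three duplicate ACKs needed for a standard fast retransmit, so the
    # sender has to wait for a retransmission timeout instead.
    def dup_acks_after_loss(packets_in_flight, lost_index=0):
        # only packets sent after the lost one generate duplicate ACKs
        return packets_in_flight - lost_index - 1

    print(dup_acks_after_loss(packets_in_flight=2))   # 1 -> no fast retransmit, wait for timeout
    print(dup_acks_after_loss(packets_in_flight=10))  # 9 -> fast retransmit fires quickly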

5 The Solution The proposed solution involves modifying the TCP protocol so that it handles “thin-stream” traffic differently than other streams. There were three different modifications made to the TCP protocol:

6 Removal of Exponential Backoff One of the main disadvantages of thin-stream traffic for online games is the increasing length of timeout windows due to exponential backoff. The longer wait between retransmissions means that fewer packets are being sent at once, decreasing the likelihood of a fast retransmission.

7 Continued… In response, the exponential backoff function is disabled whenever there are fewer than four packets in transit. This means that, under these conditions, timeouts will occur sooner and more packets will be sent, increasing the chances of a fast retransmission due to three duplicate ACKs.
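A minimal sketch of the idea, assuming the four-packet limit from the slide; this is our own simplification in Python, not the authors' kernel patch:

    THIN_STREAM_LIMIT = 4  # fewer packets in flight than this => treat the stream as thin

    def next_rto(current_rto, packets_in_flight):
        """Retransmission timeout to use after a timeout fires."""
        if packets_in_flight < THIN_STREAM_LIMIT:
            return current_rto        # thin stream: keep the timeout constant
        return current_rto * 2        # regular stream: standard exponential backoff

    # Example: three consecutive timeouts starting from a 200 ms RTO
    rto = 0.2
    for _ in range(3):
        rto = next_rto(rto, packets_in_flight=2)
    print(rto)  # stays at 0.2 s; exponential backoff would have reached 1.6 s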

8 Faster Fast Retransmit Because it often takes a long time in thin-stream traffic for the sender to receive three duplicate ACKs, the TCP modifications decrease the number of duplicate ACKs needed to trigger a fast retransmit from three to one, so long as there are fewer than four packets in flight. This causes more packets to be sent, raising the number of packets in flight above four and causing the normal fast-retransmission requirements to be reinstated.
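The modified trigger can be sketched as follows (our own simplification; the thresholds come from the slide, the names are hypothetical):

    THIN_STREAM_LIMIT = 4
    NORMAL_DUPACK_THRESHOLD = 3

    def should_fast_retransmit(dup_acks, packets_in_flight):
        # thin streams retransmit on the first duplicate ACK; others use the normal rule
        threshold = 1 if packets_in_flight < THIN_STREAM_LIMIT else NORMAL_DUPACK_THRESHOLD
        return dup_acks >= threshold

    print(should_fast_retransmit(dup_acks=1, packets_in_flight=2))   # True  (thin stream)
    print(should_fast_retransmit(dup_acks=1, packets_in_flight=10))  # False (normal rules apply)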

9 Redundant Data Bundling Because the packets in thin-stream traffic are well below the maximum segment size (MSS) of TCP packets, redundant data bundling copies data from unacknowledged packets still in the send buffer into the unused space of the packet currently being sent.

10 Continued… This increases the likelihood that data which was lost and has not yet been ACKed by the receiver still arrives inside a later packet, reducing the need to wait for retransmissions.
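A minimal sketch of the bundling step, assuming simple byte-string payloads; this is our own illustration, not the kernel implementation:

    MSS = 1460  # maximum segment size in bytes

    def build_segment(new_data, unacked, mss=MSS):
        """Copy unacknowledged data into the new segment while everything still fits in one MSS."""
        bundle = b""
        for chunk in unacked:  # oldest unACKed data first
            if len(bundle) + len(chunk) + len(new_data) > mss:
                break
            bundle += chunk
        return bundle + new_data

    # Example: two 100-byte unACKed updates are bundled with a new 100-byte update
    segment = build_segment(b"c" * 100, [b"a" * 100, b"b" * 100])
    print(len(segment))  # 300 bytes -> still one segment, far below the MSS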

11 Testing These modifications were tested using five machines. – One was designated to run the server application. – Another acted as a network emulator. – Three acted as clients, running bots to emulate actual game play. – The three clients ran 26 bots to simulate a full BzFlag game. The experiments were run with an average loss rate of 2% and an RTT of 130 ms. This setup is shown in figure 1.
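The slides do not say which emulator was used; assuming a Linux box running tc/netem between the clients and the server, the loss and delay figures above could be reproduced roughly as in this sketch (interface names are placeholders, and root privileges are required):

    import subprocess

    def configure_emulator(iface, one_way_delay_ms=65, loss_pct=2.0):
        # 65 ms applied to each of the emulator's two interfaces approximates the 130 ms RTT
        cmd = ["tc", "qdisc", "add", "dev", iface, "root", "netem",
               "delay", f"{one_way_delay_ms}ms", "loss", f"{loss_pct}%"]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        for iface in ("eth0", "eth1"):  # client-facing and server-facing interfaces (placeholders)
            configure_emulator(iface)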

12 Visual Representation

13 Testing… The online game BzFlag was chosen because its packet interarrival and packet-size characteristics qualify it as a “thin-stream” game. The game is played with tanks on a 200×200 unit field (in BzFlag measurement units), with positions referred to by the coordinates w and u.

14 The Results The results of this test showed that the modifications improved the responsiveness of the game. Specifically, latency creates a discrepancy between where other players are perceived to be and their actual location.

15 Example of Perceived Position A is a player trying to shoot an opponent. B is the position at which player A perceives that opponent to be, and B’ is the opponent's actual position.

16 Hit Limit Calculation This calculation is used to show how a “Perfect Shot” can be affected by the deviation between the perceived and actual position.
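The slide does not reproduce the calculation itself. A simplified reconstruction (our own, not necessarily the authors' exact model): if the target moves at speed v and the perceived position lags by a delay t, the deviation between B and B’ is

    d = v \cdot t ,

and a shot aimed perfectly at the perceived position B still hits the target only while that deviation stays within roughly half the tank's width w:

    d \le \frac{w}{2} .

Lower latency therefore directly widens the window in which a “Perfect Shot” connects.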

17 How this affects the game Using this calculation, we can examine the difference in latency between the modified and unmodified TCP protocols. Notice that the modified TCP has a consistently higher chance of landing a “perfect shot”.

18 Conclusion We believe the authors conducted a thorough and accurate, if small-scale, study of a much larger problem. They used five machines, designating three as clients, one as the server application, and one as a network emulator between the clients and the server. With this setup they were able to compare modified and unmodified TCP connections and show from the data that the modified TCP could improve the experience of the online gaming community.

19 What We Learned (technical) We learned that latency affects the player's view of the game world. By making relatively small changes to the transport protocol used by a game, the rate of successful packet delivery can be improved and the latency decreased.

20 What We Learned (non-technical) We learned the form and structure of a formal paper as well as the research and testing that goes into writing one.

21 Source All data was provided by “Improving application layer latency for reliable thin-stream game traffic.” Authors: Andreas Petlund, Kristian Evensen, Pål Halvorsen, and Carsten Griwodz.

