===== Review =====

*** Recommendation: Your overall rating.
Accept if room (top 30% but not top 15%, borderline for INFOCOM) (3)

*** Contributions: What are the major issues addressed in the paper?
Do you consider them important? Comment on the degree of novelty,
creativity, impact, and technical depth in the paper.
The paper presents a model to show that interactive flows can improve
latency by reducing the number of timeouts they experience, and it
presents several techniques for reducing the number of timeouts.

*** Strengths: What are the major reasons to accept the paper? [Be
brief.]
Shows that timeouts can increase the latency of interactive flows
significantly.

*** Weaknesses: What are the most important reasons NOT to accept the
paper? [Be brief.]
The presented solutions are straightforward.

*** Detailed Comments: Please provide detailed comments that will be
helpful to the TPC for assessing the paper. Also provide feedback to
the authors.
The paper argues that interactive flows suffer higher latency due to
timeouts, compared to backlogged flows that can detect packet losses
through duplicate ACKs.

The paper then proposes a number of techniques to make interactive
flows detect packet losses through duplicate ACKs instead of timeouts.
The studied solutions are straightforward, and this reviewer could
think of them immediately once the problem was stated.

TCP-friendliness does not mean that a flow takes as much bandwidth as
a backlogged TCP flow; it means that it takes "no more". Here the
authors seem to take it to mean "not less".
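For reference (my own restatement of the standard definition, not a
claim from the paper): the usual quantitative reading of "no more" is
that a flow's long-term throughput should not exceed the TCP-friendly
rate for the path's round-trip time RTT and loss rate p, roughly

    T \le \frac{MSS}{RTT} \sqrt{\frac{3}{2p}} ,

so matching a backlogged TCP flow is permitted, but exceeding that
rate is not.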

A number of TCP modifications have been put forward in the recent past
for high-speed links and other situations. Many of these proposals
modify TCP's congestion control response or other parameters, and it
is not clear that these modifications are TCP-friendly either, in the
sense that they will take only as much as another "regular TCP" flow.
Linux has apparently adopted one of them (BIC) as its default.

Clearly, sending dummy packets to make the flow look like a backlogged
flow may keep pipes unnecessarily full when there are simpler
approaches to obtaining better latency, such as Approaches II and III.
But even Approach I may not be any more "selfish" or "harmful" than
other protocols that tweak congestion parameters. I am not advocating
that the fully backlogged approach is harmless, but rather that the
statement of the problem needs to be tempered in the larger context
(when UDP flows, the tweaking of congestion control parameters, and
other factors are considered).

It may be interesting to see other parameters such as link loss ratios
in Fig. 8.

===== Review =====

*** Recommendation: Your overall rating.
Likely accept (top 15% but not top 10%, significant contribution) (4)

*** Contributions: What are the major issues addressed in the paper?
Do you consider them important? Comment on the degree of novelty,
creativity, impact, and technical depth in the paper.
This paper is concerned with the problem that short TCP flows may
suffer from severe response time degradation during network
congestion. Several approaches to overcome this problem have been
proposed in the literature, but, as argued by the authors, these do not
work for very small flows (consisting of only one or two packets) or
require considerable architectural modifications in the network. Using
analytical modelling and simulations it is shown that an alternative
approach, based on sending backlogged 'dummy' packets (besides the few
'real' ones) within the same TCP connection, improves the response time
performance and does not require architectural network modifications.
This would create a clear incentive for 'misbehaviour' by users of
interactive applications, which may lead to considerable additional
network load. Next, however, it is shown that other easy-to-implement
techniques for optimizing interactive TCP applications, which are more
network friendly, perform better and hence take away the incentive for
misbehaviour.

In principle, the idea behind the main approach (and also one of the
alternatives) presented in this paper is to let the retransmission of
a lost packet be triggered at an early stage by the receipt of a
triple duplicate ACK instead of expiration of the (relatively long)
retransmission timeout (RTO) timer (which is usually the case for very
short flows, in particular if the lost packet is the only one within
the TCP connection). A third approach is based on transmitting a
number of copies of a packet such that the probability that at least
one of the copies will make it to the receiver is high.
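As a back-of-the-envelope check (my own arithmetic, not taken from the
paper): if each copy is lost independently with probability p, sending
k copies delivers at least one of them with probability

    1 - p^{k} ,

so a target failure probability \epsilon requires roughly
k \ge \log \epsilon / \log p; e.g., with p = 0.05 and
\epsilon = 10^{-3}, k = 3 copies already suffice.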

The paper is largely well written. The issue of improving the
performance of interactive TCP flows is relevant and the proposed
approaches are certainly interesting (I cannot fully judge the
originality of these ideas). The numerical results are well presented
and discussed.

The analysis used to assess and compare the performance of the
different approaches builds largely on existing models for TCP
performance, e.g. [Padhye et al.]. The main contribution of the paper
consists, in my opinion, of (i) some new (?) interesting ideas for
improving the response time performance of short TCP flows, and (ii)
useful insights into the benefits (from the user's point of view) and
drawbacks (additional resource usage, implementation) of the proposed
methods.

*** Strengths: What are the major reasons to accept the paper? [Be
brief.]
The issue of improving the performance of interactive TCP flows is
certainly relevant and the proposed approaches are interesting (I
cannot fully judge the originality of these ideas).
Substantial technical contribution (see above). I expect that this
paper will inspire further research in the proposed direction.

*** Weaknesses: What are the most important reasons NOT to accept the
paper? [Be brief.]
The mathematical modeling is a little bit 'tricky' (the eventual
approximation builds on many 'sub-approximations' and assumptions) and
does not really contain new ideas (it builds largely on existing
models, e.g., the well-known one of [Padhye et al.]).
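For completeness (the formula as I recall it from the literature, not
reproduced from the paper under review): the [Padhye et al.] model
approximates steady-state TCP throughput, for loss rate p, round-trip
time RTT, retransmission timeout T_0, and b packets per ACK, as

    B(p) \approx \frac{1}{RTT\sqrt{\frac{2bp}{3}}
        + T_0 \min\!\left(1, 3\sqrt{\frac{3bp}{8}}\right) p (1 + 32p^2)}

packets per second, where the T_0 term captures losses recovered by
timeout rather than by duplicate ACKs.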

*** Detailed Comments: Please provide detailed comments that will be
helpful to the TPC for assessing the paper. Also provide feedback to
the authors.
The analysis and simulations are done for the specific (nevertheless
very important!) case that the short TCP flow consists of only one
packet ('interactive flow'). What about the effectiveness of the
proposed approaches for 'other' short TCP flows of, say, two or three
packets?

===== Review =====

*** Recommendation: Your overall rating.
Likely accept (top 15% but not top 10%, significant contribution) (4)

*** Contributions: What are the major issues addressed in the paper?
Do you consider them important? Comment on the degree of novelty,
creativity, impact, and technical depth in the paper.
This paper investigates how misbehaving interactive users of TCP can
improve the performance they obtain by injecting dummy data into
the network during times at which they have too little data to send.
The paper quantifies the results both analytically and through
simulations and presents three different techniques to improve
interactive performance without the need to inject dummy traffic.

*** Strengths: What are the major reasons to accept the paper? [Be
brief.]
Overall, this is a very solid and timely paper. A number of
interactive applications over TCP, including some VoIP applications
and games, have already started to deploy "creative" transmission
schemes to improve their performance. This paper both analyzes the
dangers of these approaches and offers a number of alternatives that
are less harmful to the network.

*** Weaknesses: What are the most important reasons NOT to accept the
paper? [Be brief.]
The impact of the three alternatives should be investigated in more
detail, as they each have some undesirable features (making
retransmissions a bit more aggressive, injecting some additional
traffic, doubling the traffic volume).

*** Detailed Comments: Please provide detailed comments that will be
helpful to the TPC for assessing the paper. Also provide feedback to
the authors.
The analysis and simulations are based on TCP/Reno, not even NewReno.
Given that all major stacks have enabled SACK by default for a few
years now, it would be interesting to see how much incentive exists for
applications to cheat over a SACK connection. Due to SACK's more
precise retransmission behavior, some of the proposed mitigations may
also lead to different results with SACK.

The packet loss rates in the analysis and simulation go from 1% up to
10% (and even 25% in Figure 4). TCP behavior is pathological with loss
rates this high. It would also be interesting to see the behavior for
lower loss rates. (Minor point: the y-axis does not start at zero in
Figures 4 and 5, and the x-axis ranges differ between subfigures (a)
and (b) of Figures 4 and 5, making comparison difficult.)

===== TPC Review =====