[BBLISA] slow wan link

Edward Ned Harvey bblisa4 at nedharvey.com
Sat Jun 9 11:27:13 EDT 2012



> -----Original Message-----
> From: Bill Bogstad [mailto:bogstad at pobox.com]
> Sent: Friday, June 08, 2012 2:37 PM
> To: Edward Ned Harvey
> Cc: Daniel Feenberg; bblisa at bblisa.org
> Subject: Re: [BBLISA] slow wan link
> 
> On Fri, Jun 8, 2012 at 8:35 AM, Edward Ned Harvey
> <bblisa4 at nedharvey.com> wrote:
> >> From: Bill Bogstad [mailto:bogstad at pobox.com]
> >> Sent: Thursday, June 07, 2012 11:35 AM
> >>
> >> I'm going to have to disagree with this.   A congested link SHOULD
> >> drop TCP packets so that congestion control knows to slow down.
> >> It's actually this thinking which results in deploying equipment and
> >> software that creates buffer bloat.
> >
> > What are you calling buffer bloat?
> 
> I'm (attempting) to use the term as I understand Jim Gettys does.  He
> coined the term in 2010 on his blog:
> 
> http://gettys.wordpress.com/2010/12/03/introducing-the-criminal-mastermind-bufferbloat/
> 
> If you aren't familiar with ongoing work on this issue, you shouldn't
> have trouble finding information about it.   This recent CACM article
> by Jim Gettys and Kathleen Nichols might be a good place to start:
> 
> http://cacm.acm.org/magazines/2012/1/144810-bufferbloat/fulltext
> 
> In addition, the Linux 3.3 kernel has new code in it to attempt to
> start dealing with the problem.  I'm not going to respond to the rest
> of your note until you let me know that I'm using the term incorrectly
> or that it isn't relevant to a discussion about whether networks
> should ever drop packets.

I don't believe you're using the term incorrectly, and I don't think
it's irrelevant to the discussion.  In the links above, what he
describes is packet drop being used as a form of flow control.  He
calls it "congestion notification via packet drop," and I'm the one
calling that a form of flow control.

What would normally be called "flow control" is a lower-level (layer 2,
MAC) network mechanism.  When the receiver's buffer is getting
sufficiently full, the receiver sends a PAUSE frame (or one of the
newer, more powerful alternatives) to the sender on the local network,
so the sender, in hardware, knows to back off a little bit.  The
application, and even the IP layer, don't need to know about it.  No
dropped packets, and no bandwidth wasted on retransmissions.
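
For the curious, the PAUSE frame itself is tiny.  Here's a rough sketch
in Python of what one looks like on the wire, per 802.3x (real NICs
generate these in hardware; the source MAC and quanta value are
placeholders you'd fill in):

    import struct

    def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
        # IEEE 802.3x PAUSE: sent to a reserved multicast address that
        # switches do not forward, so it only affects the local link.
        dst = bytes.fromhex("0180c2000001")       # MAC Control multicast
        ethertype = struct.pack("!H", 0x8808)     # MAC Control EtherType
        opcode = struct.pack("!H", 0x0001)        # opcode 1 = PAUSE
        quanta = struct.pack("!H", pause_quanta)  # units of 512 bit times
        payload = opcode + quanta
        payload += b"\x00" * (46 - len(payload))  # pad to minimum payload
        return dst + src_mac + ethertype + payload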

While "congestion notification via packet drop" may work (not
necessarily well) for TCP, it would be fatal, or at least dramatically
harmful, to the user experience if you drop things like DNS queries.
DNS timeouts are really painful.  As I said, my first clue in
diagnosing that problem is usually users complaining that webpages
won't load, or that a page loads but some of its graphics don't.  If
the loss is bad enough, users complain that their ssh sessions and vpn
sessions are dying.
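
To see why a dropped DNS query hurts so much, consider what a stub
resolver basically does: fire one UDP datagram and wait.  A sketch in
Python (the 5-second timeout mirrors the classic resolver default; the
server address would be whatever your resolver points at):

    import random
    import socket
    import struct

    def dns_a_query(name):
        # Minimal DNS A query per RFC 1035: header, QNAME, QTYPE, QCLASS.
        header = struct.pack("!HHHHHH", random.randint(0, 0xFFFF),
                             0x0100, 1, 0, 0, 0)   # RD=1, one question
        qname = b"".join(bytes([len(p)]) + p.encode()
                         for p in name.split(".")) + b"\x00"
        return header + qname + struct.pack("!HH", 1, 1)  # QTYPE=A, IN

    def lookup(server, name, timeout=5.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        try:
            s.sendto(dns_a_query(name), (server, 53))
            return s.recvfrom(512)   # one lost packet = a full 5 s stall
        except socket.timeout:
            return None              # the "page won't load" moment
        finally:
            s.close()

One dropped datagram in either direction and the user stares at a
spinner for the full timeout before anything retries.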

Shifting gears a little bit, let's state some assumptions:

Assume you have a comparatively fast LAN, and whatever your LAN speed
is, your WAN is slower and shared.  So we're not talking about
congestion on the LAN; we're talking about congestion at the perimeter
chokepoint.  The perimeter chokepoint can distribute data to the LAN as
fast as it comes in from the WAN, so the inbound buffer there never
fills.  But there will be queueing for outbound traffic, from LAN to
WAN.  This means there is only one buffer and one chokepoint that
matters: the LAN receive buffer (queueing to outbound) on the perimeter
router/firewall.
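
Just to put rough numbers on why that one buffer matters (these are
made-up figures for illustration, not measurements):

    # Once the outbound buffer fills, every queued byte waits behind
    # the WAN link, no matter how fast the LAN is.
    wan_bps = 10_000_000           # hypothetical 10 Mb/s uplink
    buffer_bytes = 256 * 1024      # hypothetical 256 KB outbound queue

    delay = buffer_bytes * 8 / wan_bps
    print(f"worst-case queueing delay: {delay * 1000:.0f} ms")  # ~210 ms

That ~210 ms gets added to every packet that crosses the link,
interactive or not, which is exactly the bufferbloat complaint.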

If the firewall needs to explicitly signal the LAN sender about
congestion, that's ok.  The signal could be a PAUSE frame, or any other
form of layer-2 flow control.  No packet loss, and it's completely
controlled by the firewall and the switch.
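
The PAUSE mechanism is reasonably fine-grained, too.  The pause time is
expressed in quanta of 512 bit times, so the wall-clock pause scales
with link speed.  The arithmetic (just the 802.3x definition, in
Python):

    def pause_seconds(quanta, link_bps):
        # One pause quantum = 512 bit times at the link's own speed.
        return quanta * 512 / link_bps

    print(pause_seconds(0xFFFF, 1_000_000_000))  # max on GigE: ~33.6 ms
    print(pause_seconds(0, 1_000_000_000))       # quanta of 0 = "resume"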

If the firewall drops a TCP packet without notifying the LAN client,
the LAN client will eventually discover the loss through TCP signaling
from the remote host, and will retransmit.  The retransmission itself
doesn't cost anything, because again, that's just LAN traffic, not
touching the actual bottleneck.  What's costly is that the LAN client
didn't know it needed to retransmit until much later, when it finally
figured that out from the remote host.
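
"Much later" is quantifiable.  TCP's retransmission timeout is derived
from the measured RTT per RFC 6298, with a one-second floor in the
standard.  A sketch of that calculation (the 80 ms RTT is just an
example figure):

    def rto(samples, alpha=1/8, beta=1/4, min_rto=1.0):
        # Smoothed RTT and RTT variance per RFC 6298.
        srtt, rttvar = samples[0], samples[0] / 2
        for r in samples[1:]:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - r)
            srtt = (1 - alpha) * srtt + alpha * r
        return max(min_rto, srtt + 4 * rttvar)

    print(rto([0.080] * 10))  # -> 1.0: at least a one-second wait

Fast retransmit via duplicate ACKs can beat that timer, but only if
enough later segments survive the same congested link to trigger it.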

If the firewall drops some other packet - UDP, ICMP... - then the
sender is never notified at all.  Some traffic (streaming audio/video)
will never know, and will never care.  But other traffic (ping, DNS)
will certainly notice, and the user may care dramatically.

Dropping packets is not the answer.  Flow control is the answer.


