Neil Sinhababu and Nicholas Beaudrot's political blog
Thursday, January 6, 2011
Fear Not The Buffer Bloat (Nerd Alert)
Kevin Drum digs up this blog post from Bob Cringely about "bufferbloat", which alleges that Windows 7, OS X, and other fancy new technologies are going to destroy the Internet by inadvertently causing logjams at various points between, say, the Xbox 360 streaming your Netflix movie and the actual Netflix servers. We're going to take a brief digression from policy and politics to point out that yes, Virginia, the Internet is getting faster, and Win7/OS X won't destroy it. This is a long and drawn-out story, but there are two main components to it.
Web browsing feels slower because web pages today are doing much, much more than web pages did as little as five or ten years ago.
The protocols governing the transfer of data across the internet were designed, as Cringely points out, in the mid-1980s. At that point, the Internet was used to send large files over slow networks. Today, most web traffic consists of short files being sent over fast networks. (Streaming video is somewhat different, but we'll set that aside.) Modern web browsers and quality websites play tricks to try to compensate for this, but there's only so much they can do.
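To see why short transfers over fast links are a bad fit for those older designs, here's a rough sketch (in Python, with made-up numbers for the segment size, initial window, and round-trip time) of how TCP slow start makes a small page fetch pay mostly in round trips rather than bandwidth:

```python
# Rough illustration (not a simulator): how many round trips a short TCP
# transfer needs under classic slow start, assuming no loss.
# All numbers below are illustrative assumptions.

MSS = 1460          # bytes per segment
INIT_CWND = 3       # initial congestion window, in segments (an older default)
RTT_MS = 50         # round-trip time in milliseconds

def round_trips(file_bytes):
    """Count RTTs to deliver file_bytes, doubling cwnd each round (slow start)."""
    segments_left = -(-file_bytes // MSS)   # ceiling division
    cwnd, rtts = INIT_CWND, 0
    while segments_left > 0:
        segments_left -= cwnd
        cwnd *= 2
        rtts += 1
    return rtts

for size in (10_000, 100_000, 1_000_000):   # 10 KB, 100 KB, 1 MB objects
    r = round_trips(size)
    print(f"{size/1000:>7.0f} KB object: ~{r} RTTs ≈ {r * RTT_MS} ms, "
          f"no matter how fast the link is")
```

With these assumed numbers, a 10 KB object costs about 100 ms of pure round-tripping even on an infinitely fast link, which is why a page made of dozens of small objects feels slow regardless of your bandwidth.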
The people who build large websites, web browsers, and networks are not idiots, and they're hard at work figuring out how to deal with this. If you really want to get your nerd on, here's a 25-minute presentation by Google's Urs Hölzle where he goes through some ideas on how to make things drastically faster simply by altering protocols:
It's like that Beatles song about things getting better all the time.
5 comments:
Anonymous said...
Cringely is summarizing https://gettys.wordpress.com/category/bufferbloat/ and you're summarizing Cringely. Major issues get lost in the compression. Changing protocols isn't the answer; it's a mirage.
Okay, having read an awful lot of Gettys, I'd say he appears completely obsessed with the server/network side of the equation and completely oblivious to behavioral changes on the client side.
Separately, it is true that absurdly large buffers lead to bad behavior in pathological cases. But most of the time things are not pathological. You can induce terrible latency in application A if application B is saturating the uplink (e.g. transmitting a file). Horrors! But, how often does that happen? Don't watch video and bittorrent at the same time, people. This is not rocket science!
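For anyone who wants to check how often this actually bites them, here's a rough self-test sketch; the target host and sample counts are arbitrary choices, and the ping flags assume Linux/macOS:

```python
# Sample ping RTTs before and during a bulk upload and compare the medians.
# Start a large upload (scp, backup, torrent seeding, etc.) yourself between
# the two measurement windows.
import re
import subprocess
import statistics

HOST = "8.8.8.8"   # any reasonably close, reachable host will do (assumption)
SAMPLES = 20

def ping_rtts(host, count):
    """Return RTTs in ms parsed from the system ping command (Linux/macOS syntax)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    return [float(m) for m in re.findall(r"time=([\d.]+)", out)]

idle = ping_rtts(HOST, SAMPLES)
input("Idle baseline done. Start a large upload now, then press Enter...")
loaded = ping_rtts(HOST, SAMPLES)

print(f"idle:   median {statistics.median(idle):.1f} ms")
print(f"loaded: median {statistics.median(loaded):.1f} ms")
# On a bufferbloated uplink the loaded median is often hundreds of ms or worse.
```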
Also, anyone who thinks 20-50 Mbps is a "typical" broadband connection is severely divorced from reality.
Lastly, protocol changes are *an* answer, not *the* answer. Cutting down on buffering would probably help too, but it's not going to solve all the world's problems. In fact, for the typical use case, I bet throughput *is* more important than latency (in that latency will be limited by factors other than the fact that "something else is saturating the network").
> I bet throughput *is* more important than latency
Gettys makes the point that latency and throughput are interrelated, and that super-high latency is what is wrecking throughput. He backs this up with hard data. He's doing simple file copies on a single TCP connection, and that clobbers his whole network connection so that other, low-bandwidth services like ping can't get through, even though the file copies are themselves experiencing terrible throughput.
In other words, if you can't get packets through, a client-side protocol change isn't going to help you.
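For a sense of scale, here's a back-of-the-envelope sketch of the latency added once a buffer in front of a slow uplink fills up; the buffer sizes and uplink rates are illustrative assumptions, not Gettys's measured setup:

```python
# Once TCP fills an oversized buffer in front of a slow uplink, every packet
# (including tiny pings) waits behind the whole backlog.

def queueing_delay_s(buffer_bytes, uplink_bits_per_s):
    """Worst-case time a newly arriving packet waits behind a full buffer."""
    return (buffer_bytes * 8) / uplink_bits_per_s

for buf_kb in (64, 256, 1024):                     # assumed buffer sizes
    for up_mbps in (1, 2, 10):                     # assumed uplink rates
        d = queueing_delay_s(buf_kb * 1024, up_mbps * 1_000_000)
        print(f"{buf_kb:>5} KB buffer @ {up_mbps:>2} Mbps uplink -> "
              f"~{d*1000:>6.0f} ms of added latency")
```

A megabyte of buffered data sitting in front of a 1 Mbps uplink works out to roughly eight seconds of queueing delay for everything else on the link, which is the ballpark of the pathological RTTs Gettys reports.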
BTW, if my arguments and Gettys's experimental proof don't persuade you, read this part of https://gettys.wordpress.com/2010/12/06/whose-house-is-of-glasse-must-not-throw-stones-at-another/ again.
"At this point, I worried that we (all of us) are in trouble, and asked a number of others to help me understand my results, ensure their correctness, and get some guidance on how to proceed. These included Dave Clark, Vint Cerf, Vern Paxson, Van Jacobson, Dave Reed, Dick Sites and others. They helped with the diagnosis from the traces I had taken, and confirmed the cause."
Okay, so in general, there are two different problems that need separating.
One is that "web pages feel like they are slower to load than in the late '90s". The issues for people in this first group are **overwhelmingly** related to increases in the complexity of web pages without any improvement in page construction techniques, resulting in lots of stalling at the browser level. You can read Steve Souders or any number of talks from Velocity to get the gist of this. Changes at any level of protocol can result in drastic performance improvements here. At present, after general page cleanup, the next step in performance improvements is to play lots of tricks (domain sharding, spriting, etc.) that are in effect a poor man's substitute for protocol changes. All of these tricks and protocol changes are geared at improving the throughput *of content* (i.e. having more content with less packet overhead and starvation of the browser) in situations with *relatively small file transfers that need to occur quickly*. Note that this has almost nothing to do with the problem Gettys faces.
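Here's the rough arithmetic behind why those tricks help; the resource count, per-hostname connection limit, and timings are assumptions, not figures from Souders or Velocity:

```python
# Browsers of that era capped parallel connections per hostname (6 is a common
# figure), so a page with many small resources pays for serialized "waves" of
# requests. Sharding assets across more hostnames adds parallelism; a
# multiplexed protocol would get a similar effect over one connection.

RESOURCES = 60          # small images/scripts/stylesheets on the page (assumed)
PER_HOST_CONNS = 6      # typical per-hostname connection limit (assumed)
RTT_MS = 80             # round trip to the server (assumed)
FETCH_MS = 20           # transfer time per small resource once started (assumed)

def page_fetch_ms(hostnames):
    parallel = PER_HOST_CONNS * hostnames
    waves = -(-RESOURCES // parallel)        # ceiling division
    return waves * (RTT_MS + FETCH_MS)

for hosts in (1, 2, 4):
    print(f"{hosts} hostname(s): ~{page_fetch_ms(hosts)} ms for {RESOURCES} resources")
```

With these made-up numbers, going from one hostname to four cuts the fetch time from about a second to about 300 ms, entirely by working around the connection cap rather than by moving more bytes.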
The second problem is the bufferbloat problem as described by Gettys. My claim is that this is a "doctor, it hurts when I do this! / well, don't do that, then!" problem. Seriously, if you are unhappy with your experience in interactive applications while uploading large files ... how about *don't upload large files from your residential-grade connection*, or *get commercial-grade internet*. The typical home user is not uploading large files with any regularity. In fact, since media streaming usually has asymmetric transfer rates, it's unclear whether media streaming would even reproduce the problem Gettys faces. It is perfectly sensible for carriers to optimize for more common use cases, and for residential-grade connections, Gettys just isn't a common use case. It's possible that carriers could do better QoS in the Gettys scenario, but if you called and complained about it and the rep said "you should consider a small business 3 up/3 down plan" I'd say that's a little toolish and lazy of them, but not necessarily an evil decision on their part.
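If someone does hit the Gettys scenario at home, one workaround consistent with the "don't do that, then" advice is to pace the bulk upload a bit below the uplink rate so the modem's buffer never fills; the sketch below assumes a 1 Mbps uplink and an 80% cap, both made-up numbers, and uses a stand-in sink instead of a real socket:

```python
# Pace a bulk transfer under a cap so the uplink queue stays short and
# interactive traffic remains responsive. Rates and chunk size are assumptions.
import time

UPLINK_BPS = 1_000_000          # nominal uplink: 1 Mbps (assumption)
CAP_BPS = int(UPLINK_BPS * 0.8) # send at ~80% of line rate to keep queues short
CHUNK = 8 * 1024                # bytes per send

def paced_send(data, send_fn):
    """Send data in chunks, sleeping so the average rate stays under CAP_BPS."""
    start, sent = time.monotonic(), 0
    for i in range(0, len(data), CHUNK):
        send_fn(data[i:i + CHUNK])
        sent += len(data[i:i + CHUNK])
        target = (sent * 8) / CAP_BPS            # seconds this much data "should" take
        sleep_for = target - (time.monotonic() - start)
        if sleep_for > 0:
            time.sleep(sleep_for)

# Example: "upload" 256 KB to a stand-in sink and report the achieved rate.
payload = b"x" * (256 * 1024)
t0 = time.monotonic()
paced_send(payload, lambda chunk: None)          # replace lambda with a real socket send
print(f"effective rate: {len(payload) * 8 / (time.monotonic() - t0) / 1e6:.2f} Mbps")
```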
To drive this point home, while BitTorrent and other P2P applications are a major contributor to upstream network bytes, 90% of upstream bytes are generated by the top 20% of internet subscribers, according to the Sandvine fall 2010 usage report:
http://www.sandvine.com/downloads/documents/2010%20Global%20Internet%20Phenomena%20Report.pdf
Large file transfers just aren't the common use case for home users. Fat tree networks are a little broken, but not as much as Gettys et al. would have us believe.