HTTP/2 – The Missing Summary

We know our browsers support it, and we know AWS, Akamai, and other big players support it within their infrastructures already…but do your applications and on-prem infrastructure have what it takes to leverage the awesomeness that is HTTP/2?

Think about it. We went from HTTP/0.9, to HTTP/1.0, to HTTP/1.1…to a full version bump with HTTP/2. That alone should tell you that there are some very interesting features lurking under the hood. This is the in-depth summary you've been missing: the one for when you tried to read the official spec, got buried in information, got lost in sensory-overload-inducing diagrams, and finally gave up. Welcome to HTTP/2 – the missing summary.

The best part about it? This guy is fully backwards compatible from an end-user perspective. He’s like a PS3 — able to play PS2 games, albeit with shittier quality, but still comes through in a pinch when you need to play Tony Hawk’s Pro Skater 3 — just like whenever you feel like visiting the official Space Jam website from 1996. HTTP/2 is completely transparent to the client, whose browser can automatically downgrade a connection to HTTP/1.1 without skipping a beat.

But the real secret sauce has somehow stayed secret enough that adoption is still slow (most likely due to the ongoing need to support legacy applications and infrastructure). But just so you know: this shit is the way of the future. We're not talking IPv6 here; this is production-ready today and fully supported by every major browser. The only thing lacking is adoption by developers, and at this point it should be the standard moving forward. HTTP/2 has the ability to revolutionize applications by improving overall performance by as much as 55%, and that's money in the bank when we're talking large-scale infrastructure.

This article is not meant to be an in-depth, low-level technical spec on the protocol itself. I will link to some resources later if you want to dive into all that mess; this is meant to be a somewhat-brief, mid-level summary of the main features of HTTP/2, with enough information that you could speak somewhat intelligently on the topic without making a total fool of yourself. These are the things you should know when somebody asks you, “what is HTTP/2,” and you are like, “it’s the next version of HTTP after 1.1,” and they are like, “oooh, interesting, now tell me more.”


The main features of HTTP/2


  • Full request and response multiplexing
  • Compression of HTTP header fields
  • Request prioritization
  • Server push

That being said, if you are a developer, you would be wise to jump on the HTTP/2 bandwagon and take advantage of these capabilities deliberately, because HTTP/2 is not backward compatible at the TCP client/server level: the new binary framing layer changes what the bytes on the wire look like, hence the major jump from version 1.1 to version 2.
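
To make "binary framing layer" concrete, here is a minimal sketch (in Go, with made-up example values) of the fixed 9-byte header that precedes every HTTP/2 frame, as defined in RFC 7540. It's purely illustrative; real clients and servers lean on an HTTP/2 library for this.

```go
// A sketch of the fixed 9-byte header that precedes every HTTP/2 frame
// (RFC 7540, section 4.1). Purely illustrative; real code uses an HTTP/2 library.
package main

import (
	"encoding/binary"
	"fmt"
)

// frameHeader mirrors the 9-octet header on the wire.
type frameHeader struct {
	Length   uint32 // 24-bit payload length
	Type     uint8  // 0x0 DATA, 0x1 HEADERS, 0x2 PRIORITY, 0x8 WINDOW_UPDATE, ...
	Flags    uint8  // frame-type-specific flags (END_STREAM, END_HEADERS, ...)
	StreamID uint32 // 31-bit stream identifier; 0 means the connection itself
}

// encode packs the header into its 9-byte wire representation.
func (h frameHeader) encode() []byte {
	buf := make([]byte, 9)
	buf[0] = byte(h.Length >> 16) // length is big-endian, 24 bits
	buf[1] = byte(h.Length >> 8)
	buf[2] = byte(h.Length)
	buf[3] = h.Type
	buf[4] = h.Flags
	binary.BigEndian.PutUint32(buf[5:9], h.StreamID&0x7fffffff) // top bit is reserved
	return buf
}

func main() {
	// A HEADERS frame carrying 42 bytes of HPACK-encoded headers on stream 1,
	// with the END_HEADERS flag (0x4) set.
	h := frameHeader{Length: 42, Type: 0x1, Flags: 0x4, StreamID: 1}
	fmt.Printf("% x\n", h.encode())
}
```

The payload that follows those 9 bytes depends on the frame type: HPACK-encoded headers for HEADERS frames, raw body bytes for DATA frames, and so on.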


HTTP/2 vs SPDY: FINISH HIM!

SPDY is to Google as Blackberry is to RIM. Both were somewhat revolutionary in their time, but the ideas and technologies were swallowed up by bigger fish. To further this horrible analogy, you have Apple, who basically took a revolutionary and innovative existing technology and built it out to its full potential (the iPhone), which pretty much spelled the end for Blackberry. Well, the same goes for HTTP/2 and the HTTP Working Group (HTTP-WG): they built off of all the existing research and technology that Google poured into SPDY and leveled that bitch up into a full standard protocol, using the SPDY spec as the foundation for their grand masterpiece. So if you are wondering why you haven't heard a lot about SPDY lately, it's because it's literally HTTP/2 now.


The major shortcomings of HTTP/1.x


  • Clients need to use multiple connections in order to achieve concurrency and reduce latency, which comes with a steep sacrifice to performance
  • No header compression, which causes unnecessary network traffic
  • No resource prioritization, resulting in poor use of the underlying TCP connection

HTTP/2 enables a more efficient use of network resources and a reduced perception of latency by introducing header field compression and allowing multiple concurrent exchanges on the same connection… Specifically, it allows interleaving of request and response messages on the same connection and uses an efficient coding for HTTP header fields. It also allows prioritization of requests, letting more important requests complete more quickly, further improving performance.

The resulting protocol is more friendly to the network, because fewer TCP connections can be used in comparison to HTTP/1.x. This means less competition with other flows, and longer-lived connections, which in turn leads to better utilization of available network capacity. Finally, HTTP/2 also enables more efficient processing of messages through use of binary message framing. 

(Hypertext Transfer Protocol version 2, Draft 17)

Request and Response Multiplexing

Previously, in HTTP/1.x, a client that wanted to make parallel requests to a server had to open multiple connections, and each connection could only deliver one response at a time (response queuing). HTTP/2's new binary framing layer changes that: request/response multiplexing allows both the client and the server to break an HTTP message down into individual frames, interleave them, and reassemble them on the other end. This is by far the most important feature of HTTP/2. It allows multiple requests and responses to be interleaved in parallel, without blocking on any single one, all while using a single TCP connection. The future is NOW, folks.
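
Here's a minimal client-side sketch of what that buys you, assuming Go and any HTTP/2-capable HTTPS endpoint (https://example.com and the paths below are stand-ins). Go's net/http negotiates HTTP/2 over TLS via ALPN automatically, so these concurrent requests end up interleaved as frames on one TCP connection instead of opening a connection apiece.

```go
// A minimal sketch of client-side request/response multiplexing.
// The host and paths are placeholders for any HTTP/2-capable server.
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	client := &http.Client{} // default transport speaks HTTP/2 via ALPN when the server offers it
	paths := []string{"/", "/style.css", "/script.js", "/logo.png"}

	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		go func(path string) {
			defer wg.Done()
			resp, err := client.Get("https://example.com" + path)
			if err != nil {
				fmt.Println(path, "error:", err)
				return
			}
			defer resp.Body.Close()
			// resp.Proto reports "HTTP/2.0" when the exchange was multiplexed.
			fmt.Println(path, resp.Proto, resp.Status)
		}(p)
	}
	wg.Wait()
}
```

If the server only speaks HTTP/1.1, the exact same code silently falls back to the old multiple-connection behavior.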


Stream Prioritization

Since HTTP/2 splits traffic into many streams, it also lets you attach a weight and a dependency to each individual stream. That means your most critical resources can claim the biggest share of the connection, the resources that depend on them queue up behind them, and lower-priority streams still get enough capacity to keep moving…and because priorities are hints rather than hard guarantees, none of this blocks the connection.
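
To show what is actually being communicated, here is a small sketch using the low-level framer from golang.org/x/net/http2. In real life your browser builds the priority tree for you; the stream IDs and weight below are made up for illustration.

```go
// A sketch of what a PRIORITY frame expresses, written with the low-level
// framer from golang.org/x/net/http2. Stream IDs and weight are invented.
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2"
)

func main() {
	var buf bytes.Buffer
	framer := http2.NewFramer(&buf, &buf)

	// Declare that stream 5 (say, a render-blocking CSS file) depends on
	// stream 3 (the HTML document) and should get a large share of the
	// bandwidth among its siblings. Weight is 0-255 on the wire and is
	// interpreted as weight+1.
	err := framer.WritePriority(5, http2.PriorityParam{
		StreamDep: 3,
		Exclusive: false,
		Weight:    219,
	})
	if err != nil {
		fmt.Println("write error:", err)
		return
	}
	fmt.Printf("PRIORITY frame on the wire: % x\n", buf.Bytes())
}
```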


One Connection Per Origin

Thanks to the new binary framing format, you no longer need multiple TCP connections to multiplex streams in parallel. A single, persistent TCP connection per origin is all that's required, and that has some amazing performance benefits. With HTTP/1.x, an average of 74% of active connections end up carrying just a single transaction, compared to about 25% with HTTP/2, which is an enormous reduction in overhead. By reusing the same persistent connection, HTTP/2 makes better use of the underlying TCP protocol, which was designed for exactly that kind of long-lived behavior, and by using fewer connections the resource footprint is drastically reduced. In effect, this improves network utilization and capacity, reducing network latency, improving throughput, and cutting overall operational costs.


Flow Control

I won’t dive very deep into this one, but basically, think TCP flow control; it's essentially the same idea, just at a much more granular level. Since multiple streams are multiplexed inside a single TCP connection, HTTP/2 implements flow control at the application layer: each side advertises, per stream and per connection, how much data it is willing to receive, and the sender has to stay within those windows. The spec deliberately doesn't dictate how those windows should be managed; it only provides the building blocks and leaves it to implementers to come up with flow-control strategies that best fit their particular use cases.
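
The main building block is the WINDOW_UPDATE frame, which grants the peer credit to send more bytes. A minimal sketch, again using golang.org/x/net/http2's framer; in a real client or server the library manages these windows for you, and the numbers here are illustrative.

```go
// A sketch of HTTP/2 flow control at the frame level. Window sizes are invented.
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2"
)

func main() {
	var buf bytes.Buffer
	framer := http2.NewFramer(&buf, &buf)

	// Tell the peer it may send 64 KiB more data on stream 1 (a per-stream
	// credit), and 1 MiB more across the whole connection (stream ID 0).
	_ = framer.WriteWindowUpdate(1, 64*1024)
	_ = framer.WriteWindowUpdate(0, 1024*1024)

	fmt.Printf("WINDOW_UPDATE frames: % x\n", buf.Bytes())
}
```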


Server Push

This is an interesting feature that allows a server to send multiple responses for a single client request. In other words, a client sends one request, and the server can reply with that resource plus any related resources it knows the client is about to need. So let's say a client requests index.html: the server could push script.js and style.css along with the initial response, before the client even asks for them. You can think of it as inlining all of your JavaScript and CSS assets, except the pushed resources are still delivered as separate, cacheable responses instead of a jumbled-up mess of code inside the HTML. This is obviously an extremely basic example of a functionality that opens up a whole new world of possibilities and, in effect, changes our whole definition of what the HTTP protocol can be used for.
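
In Go, that index.html scenario looks roughly like the sketch below; push is exposed through the http.Pusher interface, which is only available when the underlying connection is actually HTTP/2. The file names, port, and cert.pem/key.pem paths are placeholders.

```go
// A sketch of server push via Go's net/http and the http.Pusher interface.
// File names, port, and certificate paths are placeholders.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/index.html", func(w http.ResponseWriter, r *http.Request) {
		// The type assertion only succeeds on HTTP/2 connections.
		if pusher, ok := w.(http.Pusher); ok {
			// Push the dependencies before the client even asks for them.
			if err := pusher.Push("/style.css", nil); err != nil {
				log.Println("push failed:", err)
			}
			if err := pusher.Push("/script.js", nil); err != nil {
				log.Println("push failed:", err)
			}
		}
		http.ServeFile(w, r, "index.html")
	})

	// TLS is required here: browsers only speak HTTP/2 over HTTPS.
	// cert.pem and key.pem stand in for your real certificate and key.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```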


Header Compression

HTTP/1.x header metadata is sent in plain text and adds 500-800 bytes of overhead to every single transfer, and that's being generous. Once cookies get involved, this can easily reach into the kilobytes, and I have personally seen the typical ~4 KB browser cookie limit blown past on multiple occasions for exactly this reason, which caused the obvious broken sessions that were always a pain to troubleshoot.

HTTP/2 reduces this overhead by compressing request and response header metadata with the HPACK compression format. HPACK shrinks each individual transfer and also establishes a shared compression context between client and server: both sides keep track of previously-seen header fields, so headers that were already transmitted can be encoded as tiny index references instead of being sent again in full.
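
Here's a minimal sketch of that shared context using golang.org/x/net/http2/hpack (the host, cookie value, and paths are made up). The second request's bulky, repeated fields compress down to small index references because the encoder keeps a dynamic table of what it has already sent.

```go
// A sketch of HPACK header compression using golang.org/x/net/http2/hpack.
// Header values are invented; the point is the size difference between requests.
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

// encodeRequest writes one request's header block and returns its encoded size.
func encodeRequest(enc *hpack.Encoder, buf *bytes.Buffer, path string) int {
	buf.Reset()
	for _, f := range []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":authority", Value: "example.com"},
		{Name: "cookie", Value: "session=abcdef0123456789; theme=dark"},
		{Name: ":path", Value: path},
	} {
		enc.WriteField(f)
	}
	return buf.Len()
}

func main() {
	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)

	// First request: the bulky cookie and authority are sent as literals
	// and added to the dynamic table.
	first := encodeRequest(enc, &buf, "/index.html")
	// Second request: those repeated fields become one- or two-byte indexes.
	second := encodeRequest(enc, &buf, "/style.css")

	fmt.Printf("first request headers: %d bytes, second: %d bytes\n", first, second)
}
```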

Back to ol’ SPDY: it used zlib for its header compression, and as a result we got the CRIME attack in 2012, which abused that compression to recover secrets like session cookies and allowed for session hijacking. Hence the decision to replace zlib with the purpose-built, more secure HPACK.


This ended up being a lot longer than intended…

But then again, HTTP/2 is a loaded topic. Also, given the somewhat slow adoption rate due to legacy application support, there is a pretty decent exploit that uses an HTTP/2 connection through a proxy and then downgrades to HTTP/1.1 (or vice versa; I don't specifically recall at the moment), letting you navigate a site unnoticed without leaving any tracks, effectively making you an HTTP ghost. I attended a Skytalk about it at DEF CON 27 a couple months ago, and I'll try to follow up with more info and possibly a proof of concept if anything has been published on it yet.


Additional Reading:
