HTTP/1.0 vs HTTP/1.1 vs HTTP/2

Posted by parkdifferent

HTTP 1.0 vs 1.1

Proxy support and the Host field:

  • HTTP 1.1 has a required Host header by spec.

  • HTTP 1.0 does not officially require a Host header, but it doesn’t hurt to add one, and many applications (proxies) expect to see the Host header regardless of the protocol version.

Example:

GET / HTTP/1.1
Host: www.blahblahblahblah.com

This header is useful because it allows you to route a message through proxy servers, and also because your web server can distinguish between different sites on the same machine.

So this means that if blahblahblahblah.com and helohelohelo.com both point to the same IP address, your web server can use the Host field to tell which site the client wants.
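
As a rough sketch (using the placeholder hostnames from above and a documentation IP address, 203.0.113.10, as the shared address), a client can steer name-based virtual hosting purely through the Host header; Python's standard http.client is used here:

    import http.client

    # Two sites share one IP address; only the Host header tells the
    # server which site the client wants.
    for host in ("www.blahblahblahblah.com", "helohelohelo.com"):
        conn = http.client.HTTPConnection("203.0.113.10", 80)
        conn.request("GET", "/", headers={"Host": host})
        response = conn.getresponse()
        print(host, response.status, response.reason)
        conn.close()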

Persistent connections:

  • HTTP 1.1 also allows you to have persistent connections, which means that you can have more than one request/response pair on the same HTTP connection.

  • In HTTP 1.0 you had to open a new connection for each request/response pair, and after each response the connection would be closed. This led to some big efficiency problems because every new connection starts over in TCP slow start (see the sketch below).
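
A minimal sketch of connection reuse with Python's standard http.client (example.com and the paths are placeholders): both requests ride the same TCP connection, so the second one skips the handshake and starts with a warm congestion window.

    import http.client

    # HTTP/1.1 defaults to keep-alive, so one connection serves both requests.
    conn = http.client.HTTPConnection("example.com")
    for path in ("/", "/about"):
        conn.request("GET", path)
        response = conn.getresponse()
        response.read()  # drain the body so the connection can be reused
        print(path, response.status)
    conn.close()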

OPTIONS method:

  • HTTP/1.1 introduces the OPTIONS method. An HTTP client can use this method to discover the abilities of the HTTP server. It’s mostly used for Cross-Origin Resource Sharing (CORS) in web applications, as the sketch below shows.
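
A sketch of a CORS-style preflight (the hostnames and path are placeholders): before sending a cross-origin POST, the client asks via OPTIONS whether it would be allowed.

    import http.client

    conn = http.client.HTTPSConnection("example.com")
    conn.request("OPTIONS", "/api/items", headers={
        "Origin": "https://app.example.org",
        "Access-Control-Request-Method": "POST",
    })
    response = conn.getresponse()
    print(response.status)
    print("Allow:", response.getheader("Allow"))
    print("Access-Control-Allow-Methods:",
          response.getheader("Access-Control-Allow-Methods"))
    conn.close()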

Caching:

  • HTTP 1.0 had support for caching via the If-Modified-Since header.

  • HTTP 1.1 expands on the caching support a lot by using the ‘entity tag’ (ETag), an opaque identifier for a specific version of a resource: if a representation has not changed, its entity tag stays the same.

  • HTTP 1.1 also adds the If-Unmodified-Since, If-Match, and If-None-Match conditional headers (a sketch follows after this list).

  • There are also further additions relating to caching like the Cache-Control header.
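
A sketch of entity-tag revalidation (example.com is a placeholder): the first response carries an ETag; replaying it in If-None-Match lets the server answer 304 Not Modified with an empty body instead of resending the resource.

    import http.client

    conn = http.client.HTTPSConnection("example.com")
    conn.request("GET", "/")
    response = conn.getresponse()
    body = response.read()
    etag = response.getheader("ETag")

    if etag:
        # Revalidate: if the entity tag still matches, expect 304 Not Modified.
        conn.request("GET", "/", headers={"If-None-Match": etag})
        response = conn.getresponse()
        response.read()
        print(response.status)
    conn.close()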

100 Continue status:

  • There is a new status code in HTTP/1.1: 100 Continue. It prevents a client from sending a large request body when the client isn’t sure the server can process the request, or is authorized to process it. The client first sends only the headers, and the server replies with 100 Continue to tell it to go ahead with the body.
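
A sketch of that handshake over a raw socket, since http.client hides the interim response (example.com, the path, and the tiny body are placeholders): the headers go out first, and the body is sent only after the server answers 100 Continue.

    import socket

    head = (
        "POST /upload HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Content-Length: 5\r\n"
        "Expect: 100-continue\r\n"
        "\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(head.encode("ascii"))  # headers only, no body yet
        status = sock.recv(4096).decode("ascii", "replace").splitlines()[0]
        print(status)                       # e.g. "HTTP/1.1 100 Continue"
        if " 100 " in status:
            sock.sendall(b"hello")          # go-ahead received: send the body
            print(sock.recv(4096).decode("ascii", "replace").splitlines()[0])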

Much more:

  • Digest authentication and proxy authentication
  • Extra new status codes
  • Chunked transfer encoding (a sketch follows after this list)
  • Connection header
  • Enhanced compression support
  • Much much more.
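
Of those, chunked transfer encoding is easy to show concretely. Each chunk is its length in hex, CRLF, the bytes, CRLF, and a zero-length chunk ends the body; this lets a server stream a response without knowing the total length up front. A minimal sketch (the "Wiki"/"pedia" split is just an illustration):

    def encode_chunked(parts):
        # Each chunk: hex length, CRLF, payload, CRLF; a zero chunk terminates.
        out = b""
        for part in parts:
            out += f"{len(part):x}\r\n".encode("ascii") + part + b"\r\n"
        return out + b"0\r\n\r\n"

    print(encode_chunked([b"Wiki", b"pedia"]))
    # b'4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n'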

What are the key differences to HTTP/1.x?

At a high level, HTTP/2:

  • is binary, instead of textual
  • is fully multiplexed, instead of ordered and blocking
  • can therefore use one connection for parallelism
  • uses header compression to reduce overhead
  • allows servers to “push” responses proactively into client caches

Why is HTTP/2 binary?

Binary protocols are more efficient to parse, more compact “on the wire”, and, most importantly, much less error-prone than textual protocols like HTTP/1.x, which need a number of affordances to “help” with things like whitespace handling, capitalization, line endings, blank lines and so on.

For example, HTTP/1.1 defines four different ways to parse a message; in HTTP/2, there’s just one code path.

It’s true that HTTP/2 isn’t usable through telnet, but we already have some tool support, such as a Wireshark plugin.
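
A taste of why parsing gets simpler: the HTTP/2 frame header is a fixed 9 octets (per RFC 7540: a 24-bit payload length, 8-bit type, 8-bit flags, then a reserved bit and a 31-bit stream identifier), so a sketch of a parser needs no text handling at all. The sample bytes below are constructed for illustration.

    def parse_frame_header(header: bytes):
        # HTTP/2 frame header: length (24 bits), type (8), flags (8),
        # reserved bit + stream identifier (31 bits); always 9 octets.
        assert len(header) == 9
        length = int.from_bytes(header[:3], "big")
        frame_type = header[3]
        flags = header[4]
        stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF
        return length, frame_type, flags, stream_id

    # A SETTINGS frame header: empty payload, type 0x4, no flags, stream 0.
    print(parse_frame_header(b"\x00\x00\x00\x04\x00\x00\x00\x00\x00"))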

Why is HTTP/2 multiplexed?

HTTP/1.x has a problem called “head-of-line blocking,” where effectively only one request can be outstanding on a connection at a time.

HTTP/1.1 tried to fix this with pipelining, but it didn’t completely address the problem (a large or slow response can still block others behind it). Additionally, pipelining has been found very difficult to deploy, because many intermediaries and servers don’t process it correctly.

This forces clients to use a number of heuristics (often guessing) to decide which requests to put on which connection to the origin, and when; since it’s common for a page to load 10 times (or more) the number of available connections, this can severely impact performance, often resulting in a “waterfall” of blocked requests.

Multiplexing addresses these problems by allowing multiple request and response messages to be in flight at the same time; it’s even possible to intermingle parts of one message with another on the wire.

This, in turn, allows a client to use just one connection per origin to load a page.
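
A sketch using the third-party httpx library (an assumption: it must be installed with HTTP/2 support, e.g. pip install 'httpx[http2]'; the URLs are placeholders): the concurrent requests share one connection as parallel HTTP/2 streams instead of queueing behind each other.

    import asyncio
    import httpx

    async def fetch_all():
        # One AsyncClient keeps a single connection per origin; with
        # http2=True, concurrent requests become multiplexed streams.
        async with httpx.AsyncClient(http2=True) as client:
            urls = [f"https://example.com/asset-{i}" for i in range(5)]
            responses = await asyncio.gather(*(client.get(u) for u in urls))
            for r in responses:
                print(r.http_version, r.status_code, r.url)

    asyncio.run(fetch_all())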

Why just one TCP connection?

With HTTP/1, browsers open between four and eight connections per origin. Since many sites use multiple origins, this could mean that a single page load opens more than thirty connections.

One application opening so many connections simultaneously breaks a lot of the assumptions that TCP was built upon; since each connection will start a flood of data in the response, there’s a real risk that buffers in the intervening network will overflow, causing a congestion event and retransmits.

Additionally, using so many connections unfairly monopolizes network resources, “stealing” them from other, better-behaved applications (e.g., VoIP).

What’s the benefit of Server Push?

When a browser requests a page, the server sends the HTML in the response, and then has to wait for the browser to parse that HTML and issue requests for all of the embedded assets before it can start sending the JavaScript, images and CSS.

Server Push potentially allows the server to avoid this round trip of delay by “pushing” the responses it thinks the client will need into its cache.

However, pushing responses is not “magical”: if used incorrectly, it can harm performance. Correct use of Server Push is an ongoing area of experimentation and research.

Why do we need header compression?

Patrick McManus from Mozilla showed this vividly by calculating the effect of headers for an average page load.

If you assume that a page has about 80 assets (which is conservative in today’s Web), and each request has 1400 bytes of headers (again, not uncommon, thanks to Cookies, Referer, etc.), it takes at least 7-8 round trips to get the headers out “on the wire.” That’s not counting response time - that’s just to get them out of the client.

This is because of TCP’s Slow Start mechanism, which paces packets out on new connections based on how many packets have been acknowledged – effectively limiting the number of packets that can be sent for the first few round trips.
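
A back-of-the-envelope version of that calculation. The segment size and initial congestion window below are assumptions; older TCP stacks started at 3-4 segments, and delayed ACKs slow the window growth below the idealized doubling used here, which pushes the count toward the 7-8 round trips quoted above.

    ASSETS = 80          # requests for one page load
    HEADER_BYTES = 1400  # header bytes per request
    MSS = 1460           # typical TCP segment payload (assumption)
    INITCWND = 4         # initial congestion window in segments (assumption)

    total = ASSETS * HEADER_BYTES  # 112,000 bytes of request headers
    cwnd, sent, rtts = INITCWND, 0, 0
    while sent < total:
        sent += cwnd * MSS  # slow start: the window grows each round trip
        cwnd *= 2           # idealized doubling; delayed ACKs make it slower
        rtts += 1
    print(f"{total} bytes of headers take about {rtts}+ round trips")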

In comparison, even mild compression on headers allows those requests to get onto the wire within one roundtrip – perhaps even one packet.

This overhead is considerable, especially when you consider the impact upon mobile clients, which typically see round-trip latency of several hundred milliseconds, even under good conditions.

Why HPACK?

SPDY/2 proposed using a single GZIP context in each direction for header compression, which was simple to implement as well as efficient.

Since then, a major attack has been documented against the use of stream compression (like GZIP) inside of encryption: CRIME.

With CRIME, it’s possible for an attacker who has the ability to inject data into the encrypted stream to “probe” the plaintext and recover it. Since this is the Web, JavaScript makes this possible, and there were demonstrations of recovery of cookies and authentication tokens using CRIME for TLS-protected HTTP resources.

As a result, we could not use GZIP compression. Finding no other algorithms that were suitable for this use case as well as safe to use, we created a new, header-specific compression scheme that operates at a coarse granularity; since HTTP headers often don’t change between messages, this still gives reasonable compression efficiency, and is much safer.
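
A sketch using the third-party hpack package (pip install hpack), a standalone implementation of the scheme; the header values are placeholders. Note how a repeated header block compresses to almost nothing once the dynamic table is primed:

    from hpack import Encoder, Decoder

    encoder, decoder = Encoder(), Decoder()
    headers = [
        (":method", "GET"),
        (":path", "/index.html"),
        ("user-agent", "example-client/1.0"),
    ]

    first = encoder.encode(headers)
    second = encoder.encode(headers)  # hits the dynamic table this time
    print(len(first), "bytes, then", len(second), "bytes for the repeat")
    print(decoder.decode(first))      # round-trips back to the header list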


