Overview taken from HERE
- Reduced perceived latency
- Multiple concurrent exchanges between client and server over one connection
- Prioritization of requests
HTTP/2 extends HTTP/1.x semantics, it doesn't replace them
- Data on an HTTP/2 connection is sent in a series of "streams".
- Each stream can carry any number of frames.
- Frames are binary-encoded pieces of a message (request or response).
- Frames from different streams can be interleaved on the connection in any order; within a single stream, frames stay in order.
- The receiving end reassembles each stream's frames back into the full message.
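To make "binary-encoded frames" concrete, here is a minimal sketch of the 9-byte frame header every HTTP/2 frame starts with (24-bit length, 8-bit type, 8-bit flags, 31-bit stream id, per RFC 7540 section 4.1). The example values (HEADERS frame on stream 1) are just illustrative.

```python
import struct

def encode_frame_header(length, ftype, flags, stream_id):
    """Pack the 9-byte HTTP/2 frame header (RFC 7540, section 4.1)."""
    # 24-bit payload length, then 8-bit type, 8-bit flags,
    # 1 reserved bit + 31-bit stream identifier
    return struct.pack(">I", length)[1:] + struct.pack(
        ">BBI", ftype, flags, stream_id & 0x7FFFFFFF
    )

def decode_frame_header(header):
    """Unpack a 9-byte header into (length, type, flags, stream_id)."""
    length = int.from_bytes(header[0:3], "big")
    ftype, flags = header[3], header[4]
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF
    return length, ftype, flags, stream_id

# HEADERS frame (type 0x1) with END_HEADERS flag (0x4) on stream 1
hdr = encode_frame_header(16, 0x1, 0x4, 1)
```

Every frame on the wire starts with this fixed-size header, which is how the receiver knows which stream a frame belongs to before reading the payload.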
- This means the client no longer has to send one request and wait for its response before reusing the connection.
- One connection can carry multiple streams of data in both directions at the same time.
- HTTP/2 needs only one connection per origin.
- This cuts the latency (TCP and TLS handshakes) of opening many client/server connections.
- Connections can be long-lived and carry many frames/streams over their lifetime.
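The interleaving and per-stream reassembly described above can be sketched like this. The `(stream_id, chunk)` tuples are a hypothetical stand-in for DATA frames arriving interleaved on one connection:

```python
# Hypothetical DATA frames from two streams, interleaved on one connection.
# Each frame carries its stream id, so the receiver can sort them out.
interleaved = [
    (1, b"<html>"),
    (3, b"body { col"),
    (1, b"</html>"),
    (3, b"or: red; }"),
]

def reassemble(frames):
    """Group frame payloads by stream id, preserving per-stream order."""
    streams = {}
    for stream_id, chunk in frames:
        streams.setdefault(stream_id, bytearray()).extend(chunk)
    return {sid: bytes(buf) for sid, buf in streams.items()}

messages = reassemble(interleaved)
```

Because each stream is reassembled independently, a slow response on one stream does not block frames of another stream on the same connection.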
- Server push lets the server send multiple resources in one request/response cycle.
- Instead of the traditional one-to-one request/response, we can now have one-to-many.
- Example: when a client requests an HTML page, the server can push the .css and .js resources along with the .html, without waiting for separate requests.
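A toy model of that push example: on a request for `/index.html`, the server emits PUSH_PROMISE frames for the extra resources before answering the original request. The frame tuples and the push policy map are illustrative only, not a real HTTP/2 API; the even promised-stream ids follow the rule that server-initiated streams use even numbers.

```python
# Hypothetical push policy: which resources to push alongside a page.
PUSHED = {"/index.html": ["/style.css", "/app.js"]}

def respond(path):
    """Return the sequence of frames a pushing server might emit."""
    frames = []
    promised_id = 2  # server-initiated streams use even stream ids
    for resource in PUSHED.get(path, []):
        # Promise each extra resource before sending any response body.
        frames.append(("PUSH_PROMISE", promised_id, resource))
        promised_id += 2
    # Then answer the request the client actually made (on stream 1 here).
    frames.append(("HEADERS", 1, path))
    return frames

frames = respond("/index.html")
```

The client sees the promises first, so it knows not to request `/style.css` and `/app.js` itself: one request, many responses.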