The Internet and its interconnected networks evolve constantly, which is why a standing task force exists: the IETF (Internet Engineering Task Force).
The IETF is a perpetual work-in-progress responsible for publishing RFCs (Requests for Comments) that define standards and improve them as time moves along. The IETF drives the continuous development of standards for the Internet protocol suite, aka the TCP/IP stack, which is similar to the OSI model (Open Systems Interconnection model) maintained by ISO (the International Organization for Standardization).
What is Technical SEO?
Technical SEO can be thought of as the engine that powers your website; technical projects and tasks have either a direct or indirect impact on search engine crawling and indexing. This includes configurations such as H1 tags, HTTP header responses, XML sitemaps, the robots.txt file, redirects, and metadata, to name a few.
These are things you should take into account when optimizing websites for search engines. Technical SEO isn’t about researching and choosing keywords, analyzing backlink profiles, or content writing, but it does have specific foundational elements that link all areas of search engine optimization together.
Web pages and how they are presented in the SERPs are both art and science, and the heavyweight component is psychological: how results appear shapes whether people click. Technical SEO is a necessary component of any website, and technical configurations are planned and built into the SEO strategy.
These configurations work directly and indirectly to impact search engine crawling, indexing, and ranking, ultimately leading to visibility on SERPs for your products and services and converting website visitors into leads and leads into revenue.
What is HTTP?
The topic of HTTP is wide and deep.
To keep this as simple as possible and stay on track, other articles will be written and published for the thousands of topics HTTP relates to.
Hypertext Transfer Protocol is on the Application Layer
Hypertext Transfer Protocol (HTTP) is an application-layer protocol for transmitting hypermedia documents, such as HTML. It was designed for communication between web browsers and web servers, but it can also be used for other purposes.
HTTP follows a classical client-server model, with a client opening a connection to make a request, then waiting until it receives a response. HTTP is a stateless protocol, meaning that the server does not keep any data (state) between two requests.
mozilla.org
Hypertext Transfer Protocol connects through port 80 by default at the application layer; alternate ports such as 8080 are common for TCP (Transmission Control Protocol) connections, but they must be specified explicitly because 80 is the default. For simplicity’s sake, there are 65,536 available port numbers (0 through 65535) for applications.
Ports 0 through 1023 are the 1,024 “well-known” ports, and HTTP’s port 80 is one of them, reserved for web browser connections. HTTP is a stateless protocol, and typically the server doesn’t maintain past client request information.
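To make the port defaults concrete, here is a minimal sketch in Python showing how a URL’s scheme implies the port a client will connect to, and how an explicit `:8080` overrides it. The hostnames are placeholders; the logic only uses the standard library.

```python
from urllib.parse import urlsplit

# Default ports for the two schemes discussed in this article.
DEFAULT_PORTS = {"http": 80, "https": 443}

def effective_port(url: str) -> int:
    """Return the port a client would connect to for this URL."""
    parts = urlsplit(url)
    # An explicit port (e.g. :8080) overrides the scheme's default.
    if parts.port is not None:
        return parts.port
    return DEFAULT_PORTS[parts.scheme]

print(effective_port("http://example.com/"))       # 80
print(effective_port("http://example.com:8080/"))  # 8080
print(effective_port("https://example.com/"))      # 443
```

This is why typing a bare `http://` address never requires a port: the browser fills in 80 (or 443 for `https://`) on your behalf.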
What’s the difference between TCP/IP stack and OSI model?
TL;DR – they differ in the layers at which data is gathered and moved: five layers in the TCP/IP stack (when the physical layer is counted) and seven layers in the OSI model. They are nearly the same at the beginning and in the end result.
In 1989, RFC 1122 defined the TCP/IP stack as four layers and did not include the physical layer, number 0 in the layering system below. Long story short: the five-layer version grew out of later convention across the networking community, including the IETF, UC Berkeley, Xerox, IBM, and ARPANET.
OSI
- Application Layer – 7
- Presentation Layer – 6
- Session Layer – 5
- Transport Layer – 4
- Network layer – 3
- Data Link Layer – 2
- Physical Layer – 1
TCP/IP
- Application Layer – 4
- Transport Layer – 3
- Internet Layer – 2
- Link Layer – 1
- Physical Layer/Media – 0
HTTP Persistence
What is HTTP persistent connection?
You may have heard the term “persistent” thrown around, but what does it mean for your mobile device’s web browser? A persistent connection can be defined as keeping something available until you need it again.
In this case, a persistent web connection, or HTTP persistent connection, means the browser reuses a single TCP connection for multiple requests and responses instead of opening a new connection for each one. If the connection fails, the client simply opens a new one and retries.
Some transport options
If you’re already using a persistent connection, you should investigate your options. One cost-effective solution is a socket-based transport such as WebSockets or MQTT (which can itself run over WebSockets).
There are several advantages to using a WebSocket connection – for example, it has a small per-message footprint. It is also easy to start using, though it can require additional infrastructure resources on the server side.
MQTT is a low-latency publish/subscribe protocol that maintains a persistent connection and can be carried over WebSockets. Unlike a bare WebSocket, it doesn’t require you to build your own delivery bookkeeping on top.
It requires less memory while offering more reliable message delivery through built-in quality-of-service levels. The connection can be either non-persistent or persistent, which makes it suitable for different situations.
Let’s back up because we are getting ahead of ourselves.
How are HTTP 1.0 connections handled?
In HTTP 1.0 the server closes the connection after every response by default, and to fix this limitation, a “keep-alive” extension was added:
Connection: keep-alive
If keep-alive is used, the server will recognize it and hold the connection open; when either side is finished communicating, it closes the door, so to speak, with “close”:
Connection: close
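To show what this looks like on the wire, here is a minimal sketch that serializes an HTTP/1.0 request opting in to keep-alive. The hostname is a placeholder, and this only builds the bytes a client would send; it doesn’t open a socket.

```python
def build_request(method: str, path: str, host: str, keep_alive: bool) -> bytes:
    """Serialize a minimal HTTP/1.0 request, optionally opting in to keep-alive."""
    headers = [
        f"{method} {path} HTTP/1.0",
        f"Host: {host}",
        # HTTP/1.0 closes after each response unless the client opts in:
        "Connection: keep-alive" if keep_alive else "Connection: close",
    ]
    # Header section ends with a blank line (CRLF CRLF).
    return ("\r\n".join(headers) + "\r\n\r\n").encode("ascii")

req = build_request("GET", "/", "example.com", keep_alive=True)
print(req.decode())
```

In HTTP/1.1 the default flipped: connections persist unless a `Connection: close` header says otherwise.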
How are HTTP 1.1 connections handled?
All connections are open and persistent unless the client or server specifies otherwise. Multiple requests may use a single connection, but there is a time-out problem, especially with long text content (many thousands of words) and audio and video streams.
Chunked transfer-coding was implemented to further govern how HTTP/1.1 connections remain open and to address an obvious deficiency in the newer version: connections that have to stay open for too long, or transfers that tie up a lot of resources.
There is a terminator called the “last-chunk” that tells the client the response body is complete and that the connection is free for the next request:
last-chunk
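For illustration, here is a minimal sketch of chunked transfer-coding in Python: each chunk is prefixed with its length in hexadecimal, and a zero-length “last-chunk” marks the end of the body. This is a simplified encoder/decoder pair, not a full HTTP parser (it ignores chunk extensions and trailers).

```python
def encode_chunked(body: bytes, chunk_size: int = 8) -> bytes:
    """Frame a body using HTTP/1.1 chunked transfer-coding."""
    out = bytearray()
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        # Each chunk: hex length, CRLF, data, CRLF.
        out += f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n"
    # The terminating "last-chunk" has length zero.
    out += b"0\r\n\r\n"
    return bytes(out)

def decode_chunked(data: bytes) -> bytes:
    """Reassemble a chunked body; stops at the zero-length last-chunk."""
    body, pos = bytearray(), 0
    while True:
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol], 16)
        if size == 0:
            return bytes(body)
        body += data[eol + 2:eol + 2 + size]
        pos = eol + 2 + size + 2  # skip chunk data and its trailing CRLF

framed = encode_chunked(b"Hello, chunked world!")
assert decode_chunked(framed) == b"Hello, chunked world!"
```

Because the receiver learns the end of the body from the last-chunk rather than from a `Content-Length` header, the server can start streaming before it knows the total size.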
Why is this important?
HTTP 1.1 introduced chunked transfer-coding, ended by a last-chunk marker, which reduces latency in subsequent requests and enables HTTP pipelining of both requests and responses. Reduced network congestion can also be attributed to this feature, because fewer TCP connections are needed when using it. It also pairs well with TLS (Transport Layer Security), the successor to SSL (Secure Sockets Layer), since the cost of a TLS handshake is paid once per connection rather than once per request.
How valuable is HTTP pipelining?
HTTP pipelining is an integral part of HTTP/1.1 connections. However, there are some problems with it that developers don’t always think about: a client can send several requests without waiting, but the server must return the responses in the same order, so one slow response blocks everything queued behind it (head-of-line blocking).
Because of this, browsers largely shipped with pipelining disabled, and developers who need many messages in flight usually reach for other transports such as WebSockets. The benefit pipelining was designed to deliver is saving round trips when sending and receiving data.
A long-lived WebSocket connection is worth its cost only if your application genuinely needs full-duplex messaging – every message the client sends or receives keeps that connection busy. For plain request/response traffic, reusing a persistent HTTP connection (and, today, HTTP/2 multiplexing) achieves the same round-trip savings without the extra infrastructure.
The goal in every case is the same: send messages as soon as they are ready instead of waiting for the previous exchange to finish. This is efficient and outstanding for performance, because the server always knows what it has to send out and can send it on time.
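The mechanics of pipelining can be sketched in a few lines: several requests are serialized back-to-back and could be written to one connection in a single burst, before any response arrives. The paths and hostname here are illustrative; the snippet only builds the request bytes.

```python
def pipeline(requests_: list[tuple[str, str]], host: str) -> bytes:
    """Serialize several HTTP/1.1 requests back-to-back for one connection."""
    wire = bytearray()
    for method, path in requests_:
        wire += (
            f"{method} {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "\r\n"
        ).encode("ascii")
    return bytes(wire)

# Both requests go out immediately; the server must answer them in order,
# which is the head-of-line blocking that HTTP/2 multiplexing later removed.
wire = pipeline([("GET", "/style.css"), ("GET", "/app.js")], "example.com")
print(wire.count(b"HTTP/1.1"))  # 2
```

The savings come from collapsing two request/response round trips into one write; the weakness is that `/app.js` cannot be answered until `/style.css` is fully sent back.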
How are HTTP/2 connections handled?
TL;DR – HTTP/2 builds on the pipelining ideas integrated into HTTP/1.1 and improves on them, even serving content before the browser requests it (server push). HTTP/2 improves how and when data is framed, delivered, and requested, with a huge focus on security.
Last-chunk framing is not supported as it was in HTTP/1.1 because it’s not needed anymore: HTTP/2 frames data itself and multiplexes requests on a single connection, avoiding the head-of-line blocking issue. HTTP/2 rose in 2015, and as fast as it came, it is being succeeded by HTTP/3, which runs over QUIC. Google, Microsoft, Facebook, and lesser-known networking think-tanks are at the forefront of improving HTTP/2, taking everything positive about it and carrying it into HTTP/3.
The IESG (Internet Engineering Steering Group) now oversees the standardization of HTTP/3 and beyond for the purpose of new transport technology and the IoT (Internet of Things): connecting devices across the planet and automating how data transport is safeguarded, changed, and controlled.
HTTP Compression
When a browser requests data over the internet, it announces which compression schemes it supports; the registry of these content codings is maintained by IANA (the Internet Assigned Numbers Authority), which also oversees IP addressing, DNS, and other critical topics related to where data comes from and where it goes.
However, if the browser and server share no compression scheme, the transfer incurs a tremendous overhead of bandwidth and a lack of speed that’s completely unnecessary. Faster and smaller is always better when transferring data.
There are many types available, including gzip (the most common), but there’s also Brotli (this is our preference).
The browser declares which compression it supports, and the server replies with an HTTP response declaring how the content was encoded or how the data transfer is encoded.
Client gzip acceptance example:
Accept-Encoding: gzip, deflate
Client Brotli acceptance example:
Accept-Encoding: deflate, br
Server HTTP response content-encoding example:
Content-Encoding: br
Server HTTP response transfer-encoding example:
Transfer-Encoding: chunked
Transfer-encoding is quite interesting because it is hop-by-hop: it applies to the connection between two nodes rather than to the resource itself. With content-delivery networks and data streams coming and going between different servers to be assembled at the browser, the data resources themselves are not encoded; instead, each connection between two nodes is addressed with a transfer-encoding response.
A great example of how this works and the result of using chunked transfer-encoding is binge-watching Netflix or Hulu all day and night and not experiencing an interruption in streaming.
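The negotiation above can be sketched server-side in a few lines. This is a simplified illustration, assuming only stdlib codings (Brotli needs a third-party package, so it is omitted here); `negotiate_encoding` is a hypothetical helper name, not a real library API.

```python
import gzip
import zlib

def negotiate_encoding(accept_encoding: str, body: bytes):
    """Pick a content-coding the client advertised and compress the body."""
    offered = [token.strip() for token in accept_encoding.split(",")]
    if "gzip" in offered:
        return "gzip", gzip.compress(body)
    if "deflate" in offered:
        return "deflate", zlib.compress(body)
    return "identity", body  # no shared coding: send uncompressed

html = b"<html>" + b"<p>repetitive markup compresses well</p>" * 50 + b"</html>"
coding, payload = negotiate_encoding("gzip, deflate", html)
print(coding, len(html), "->", len(payload))
```

The chosen coding would be echoed back to the client in a `Content-Encoding` response header, and the repetitive HTML shrinks dramatically, which is exactly why compression matters for page speed.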
HTTP Secure
As mentioned at the beginning of this article, HTTP connects through port 80. HTTPS (Hypertext Transfer Protocol Secure) connects at port 443. It is important to request and serve content securely through port 443.
When you crawl a website and see a Mixed Content error, it means a resource such as a stylesheet, PDF, script, image, or video was loaded by the browser over an insecure connection instead of HTTPS.
Mixed Content from an insecure connection is bad for SEO.
This affects SEO and the performance of the website in search results in a big way. Search engines, especially Google, do not want to serve insecure content because it leads to a bad user experience and a potential problem for the IP (Internet Protocol) address and AS (Autonomous System) involved.
If you claim to be an experienced SEO and you have to open another browser to look up “what is a mixed content error”, you should step away from the keyboard and never touch a website again. Seriously, no joke!
Google’s QUIC Transport Protocol is Where We are Going with HTTP/3
QUIC stands for Quick UDP Internet Connections, a protocol that provides several security features to ensure that the traffic between clients and servers is encrypted. This means that the information exchanged is relatively safe from eavesdroppers, and it is the primary reason QUIC is the foundation of the next generation of the HTTP protocol.
Interestingly, QUIC is the transport beneath HTTP/3. There are many reasons for that, but the most significant among them are that it reduces the number of round trips needed to establish a connection, replaces the TLS record layer with its own framing while keeping the TLS 1.3 handshake, and requires far less substantial changes from software application programmers.
The QUIC handshake is the critical component of the protocol. It combines the transport handshake with the TLS 1.3 key exchange in a single flight: the client’s first packet carries its key share, and the server answers with its own key share and certificate.
Both sides derive shared encryption keys from that exchange, so a new connection can start sending encrypted application data after a single round trip, and a resumed connection can do so with zero round trips (0-RTT).
HTTP Request Methods
The HTTP request method indicates the action the client wants performed on a server resource. Some methods are safe (read-only) and others change state; each has advantages and disadvantages depending on the use case.
GET
Headers are the best way to get information about your target resource without transferring its entire representation; a GET goes further and asks for the representation itself. When you only need summary metadata, a headers-only request (see HEAD below) saves the bandwidth of downloading everything at once.
GET requests that the target resource transfer only its data. It should not have any other effect on the server, unlike methods such as PUT or DELETE, which can modify or remove resources if you don’t handle them properly per the guidelines published by the IETF and W3C.
HEAD
HEAD requests the same representation a GET would produce but with no enclosed response body – only the headers come back; think of it as just “head”-ing out.
POST
POST HTTP requests allow you to send data from the client side – form submissions, file uploads, API payloads – and receive a response.
PUT
PUT is a request method that creates or fully replaces the target resource with the request payload, so long as the requester has access rights (such as administrators); it may require a new URI.
DELETE
The DELETE HTTP request method signals the end of a conversation with a resource. If you are using it, there is no need for further interaction: you’re finished, and the specified resource is gone.
TRACE
TRACE is an HTTP method that performs a loop-back test along the path to the target resource: the server echoes back the request it received, so you can see exactly what intermediaries changed along the way. This can help troubleshoot latency and other issues in real time.
OPTIONS
OPTIONS asks the server which operations (methods) are allowed on the resource identified by its URL.
CONNECT
CONNECT establishes a transparent TCP/IP tunnel to the destination server, most commonly so HTTPS traffic can pass through a proxy. The proxy returns a status code indicating whether the tunnel was established, and from then on it simply relays bytes between client and server.
PATCH
PATCH is a method to change parts of an existing resource without replacing or deleting the whole thing.
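To see a couple of these methods in action, here is a minimal sketch using only the Python standard library: it starts a throwaway local server, then issues a GET and a HEAD against it. The handler and responses are invented for illustration; note how HEAD returns the same status and headers as GET but an empty body.

```python
import http.server
import threading
import urllib.request

class Echo(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_HEAD(self):
        # Same status and headers as GET, but no body is written.
        self.send_response(200)
        self.send_header("Content-Length", "5")
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

# Port 0 lets the OS pick a free port; serve in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), Echo)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

get_body = urllib.request.urlopen(base + "/").read()
head_req = urllib.request.Request(base + "/", method="HEAD")
head_body = urllib.request.urlopen(head_req).read()
print(get_body, head_body)  # b'hello' b''
server.shutdown()
```

A crawler works the same way: HEAD (or a conditional GET) lets it check a page’s status and metadata without paying for the full download.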
HTTP Header Fields
This section about HTTP header fields covers an enormous part of HTTP and technical SEO.