What are HTTP and HTTPS?
HTTP stands for Hypertext Transfer Protocol. Its name tells us a lot.
Think about signing your rental agreement as a tenant.
First of all, signing it means at least two parties are involved: you and your landlord. In the HTTP context, the two sides are a client and a server.
Secondly, you spell out rights and responsibilities in the contract, so when something goes wrong, both sides know who is responsible for what. Similar rules and error-handling procedures are listed in a series of standards documents for HTTP: the RFCs.
The client and the server negotiate quite a few things in their headers. For example,
- In `Accept-Charset`, they discuss the character encoding options.
- In `Accept-Encoding`, they pick which compression algorithm should be used.
- In `Cache-Control`, they decide the cache strategy for resources.
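For example, a request and response negotiating compression might carry headers like these (the hostname and values are illustrative):

```text
GET /index.html HTTP/1.1
Host: example.com
Accept-Encoding: gzip, br
Cache-Control: no-cache

HTTP/1.1 200 OK
Content-Encoding: gzip
Cache-Control: max-age=3600
```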
As a protocol, HTTP defines rules of how computers communicate and handle errors between two parties, a client and a server.
Your car or public transportation transfers you from home to the office in the morning and takes you back at night.
So when we talk about a transfer, we mean a two-way transfer.
Sometimes, you need to drop by a grocery store and a post office before home. You can add multiple points before your destination.
It is the same in our network. There are many points between the client and the server, helping us visit a webpage efficiently. A well-known one is proxies.
As a transfer protocol, HTTP transfers data between two parties, and the transfer is two-way.
Originally, HTTP transferred text, not binaries: text that you can read. Nowadays, the data goes beyond text to include images, videos, audio, and more.
HTML is one kind of hypertext. A browser parses HTML and displays the “text” or multimedia on our screen.
As a Hypertext Transfer Protocol, HTTP is for transferring hypertext data (e.g., text, images, videos) between two points. It is a two-way transfer.
That’s HTTP.
TCP/IP model and OSI model
Before talking about HTTPS, let’s zoom out and look at where the HTTP “locates” in the system.
Often, we hear about the 7-layer OSI model and the 4-layer TCP/IP model.
What are the differences?
The TCP/IP model was invented in the 1970s. Later, the team behind OSI noticed that network designs differed widely from project to project.
The team came up with an idea: why not unify all the models and suggest the industry’s best design?
However, so many projects had been using the TCP/IP model for so long that changing the design was next to impossible. Therefore, the team offered the OSI model as a design reference, not a standard.
Let’s take a look at both models. The mapping between them is not perfectly exact, but you get the idea.
A distinction you may notice is that OSI introduces a physical layer to cover the physical transmission medium.
Another is the definition of the application layer. For most real-life projects, it doesn’t make sense to split the application layer into three layers; doing so creates more problems than it solves.
Though the TCP/IP model is the one in practical use, people refer to layer numbers from the OSI model.
For example, when someone talks about layer-4 load balancing, they mean TCP-level load balancing.
When we send data, the data flows from top to bottom through the model.
Each layer attaches its own headers (and sometimes footers) to the data.
At the receiving end, those headers and footers are removed layer by layer, from bottom to top.
At the application layer, the browser takes the hypertext and displays a beautiful webpage.
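The wrap-and-unwrap flow above can be sketched in a few lines of Python. This is a toy model of encapsulation, not real packet formats; the layer names and bracket “headers” are purely illustrative:

```python
# Toy sketch of layered encapsulation (illustrative, NOT real packet formats).
# Sending: each layer wraps the payload with its own header.
# Receiving: headers are stripped in the reverse order.

LAYERS = ["HTTP", "TCP", "IP", "Ethernet"]  # application -> link

def encapsulate(payload: str) -> str:
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame: str) -> str:
    for layer in reversed(LAYERS):  # strip from the bottom layer up
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"missing {layer} header"
        frame = frame[len(prefix):]
    return frame

frame = encapsulate("<html>hello</html>")
print(frame)               # [Ethernet][IP][TCP][HTTP]<html>hello</html>
print(decapsulate(frame))  # <html>hello</html>
```

The receiver recovers exactly what the sender’s application layer handed down, which is why each side can ignore every layer except its own.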
What security does HTTPS offer?
We know the “S” at the end of the name stands for “Secure.”
To achieve security, developers add an additional layer under HTTP: the Transport Layer Security (TLS) layer.
When data flows through this additional layer, it is encrypted by TLS. HTTP over TCP/IP becomes HTTP over TLS.
To understand what security the TLS layer offers, we first need to define what “secure” means.
Secure communication over a computer network requires four features:
- Authentication — It is for proving identities. The data can only be sent to a trusted party.
- Non-repudiation — The sender cannot later deny having sent the data; its origin can be proven with high confidence.
- Secrecy — Only the authenticated parties can get the data.
- Integrity — The data is kept as is during the entire process.
When a browser connects to a server with HTTPS, it sends a list of encryption algorithms (aka cipher suites) for future communication.
Next, the server picks its preferred cipher suite and returns it to the browser. At this moment, both parties have reached an agreement on the encryption method.
A typical cipher suite looks like this: ECDHE-RSA-AES128-GCM-SHA256.
Though complicated at first sight, it is easy to understand once you divide it into four parts:
- A key exchange algorithm, such as ECDHE.
- A digital signature algorithm, such as RSA, for the authentication and non-repudiation features.
- A symmetric-key encryption algorithm, such as AES128-GCM, which guarantees the secrecy feature.
- A digest algorithm, such as SHA-256, which adds the integrity feature.
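You can inspect real cipher suite names yourself. Python’s standard `ssl` module can list the suites a default client-side TLS context is willing to offer (the exact list depends on your OpenSSL build):

```python
import ssl

# List the cipher suites a default client-side TLS context would offer.
ctx = ssl.create_default_context()
for cipher in ctx.get_ciphers()[:5]:
    print(cipher["name"])  # e.g. TLS_AES_256_GCM_SHA384, ECDHE-... suites
```

Each name encodes the same four parts described above: key exchange, signature, symmetric cipher with its mode, and digest.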
Secrecy — encryption and decryption
Secrecy is all about encryption and decryption.
We can use a key to encrypt the plain text to ciphertext. This process is encryption.
At the other end, the decryption begins. With a key, the ciphertext can be decrypted to the original plain text.
Together, the encryption and decryption procedures form an encryption algorithm.
The key plays a critical part in the process. But what is it?
A key is merely a string of random bits.
There are two kinds of encryptions:
- symmetric-key encryption, and
- asymmetric-key encryption.
In a symmetric-key algorithm, we use the same key to encrypt and decrypt.
Initially, TLS offered quite a few symmetric encryption algorithms. As time passed, most of them were found insecure and deprecated.
Among them, two survived.
The most popular one is the Advanced Encryption Standard (AES). Its key length can be 128, 192, or 256 bits, offering various security levels.
Another is ChaCha20, designed by Daniel J. Bernstein and promoted by Google. It uses a fixed 256-bit key and has become less common as hardware-accelerated AES has spread.
A symmetric encryption algorithm usually comes with an operation mode; GCM, CCM, and Poly1305 are the most widely used ones. Take AES128-GCM as an example:
- It is an AES algorithm.
- The key length is 128 bits.
- The mode is GCM.
A symmetric-key algorithm usually runs fast. As long as both sides keep the key safe, the communication is considered secure: no one can recover the original plain text from the ciphertext without the key.
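To make “same key encrypts and decrypts” concrete, here is a toy XOR-based stream cipher. This is NOT AES and is not secure; it only illustrates the symmetric idea that one shared key reverses its own encryption:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Derive a pseudo-random byte stream from the key (toy construction, NOT secure).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse: the same function encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared-secret"
ciphertext = xor_cipher(key, b"hello HTTPS")
plaintext = xor_cipher(key, ciphertext)
print(plaintext)  # b'hello HTTPS'
```

Real TLS uses vetted ciphers like AES-GCM or ChaCha20-Poly1305, which also authenticate the data, not just scramble it.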
But how do we exchange the key safely? Anyone could intercept the key if you send it over the internet.
Smart engineers and mathematicians found a solution: asymmetric-key encryption.
From its name, you can tell the algorithm uses different keys for encryption and decryption.
The public key is shared with everyone, while the private key should be kept secret.
Asymmetric-key encryption has a key property:
- A ciphertext encrypted by the public key can only be decrypted by the paired private key.
- It works the other way, too. If you encrypt the plain text with a private key, the other party can only decrypt the ciphertext with the paired public key.
The recommended algorithm is Elliptic-Curve Diffie–Hellman Ephemeral (ECDHE), which is used for key exchange.
Asymmetric-key algorithms don’t offer this security for free: the process is roughly two orders of magnitude slower than symmetric encryption.
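The core trick behind Diffie–Hellman key exchange can be sketched with plain modular arithmetic. ECDHE applies the same idea on elliptic curves; the parameters below are far too small for real security and are for illustration only:

```python
import secrets

# Toy finite-field Diffie-Hellman (ECDHE follows the same idea on elliptic curves).
# These parameters are toy-sized - real deployments use large standardized groups.
p = 2**127 - 1   # a Mersenne prime, still far too small for real use
g = 3            # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's private key (never sent)
b = secrets.randbelow(p - 2) + 1   # Bob's private key (never sent)

A = pow(g, a, p)  # Alice sends A over the open network
B = pow(g, b, p)  # Bob sends B over the open network

# Each side combines its own private key with the other's public value.
shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob = pow(A, b, p)     # (g^a)^b mod p
assert shared_alice == shared_bob  # same secret, never transmitted
```

An eavesdropper sees only `p`, `g`, `A`, and `B`; recovering the shared secret from those is the (computationally hard) discrete logarithm problem.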
Can we find a balance and benefit from both kinds of algorithms?
Hybrid encryption balances costs and benefits
As the first step, we exchange the symmetric key (for example, a session key) using an asymmetric encryption algorithm.
- We encrypt the session key with the public key.
- The other party decrypts it with the private key and receives the session key.
The session key is usually short, so the encryption time of the asymmetric algorithm remains acceptable.
If a hacker intercepts the communication, they cannot decode the session key without the paired private key.
Integrity — digest algorithm
The digest algorithm is a hash function. It produces a short fingerprint (the digest) that, for all practical purposes, uniquely identifies the original plain text.
MD5 (Message-Digest 5) and SHA-1 used to be popular and are considered not secure anymore.
TLS recommends SHA-2 (Secure Hash Algorithm 2), the name for a series of algorithms, such as SHA224, SHA256, and SHA384.
The number in the algorithm name gives the length of the digest in bits. For example, SHA256 generates a 256-bit digest.
Let’s add the digest to our process and see how it works.
- The digest algorithm generates a digest out of the original plain text.
- We don’t want to leak the digest either, so the session key encrypts both the plain text and the digest into ciphertext.
- At the other end, the session key decrypts the ciphertext. The plain text and the digest (Digest A) are received separately.
- With the same digest algorithm, we can generate Digest B out of the plain text.
- Digest A should be the same as Digest B.
What if someone changed your original text?
The digest algorithm is sensitive to change: edit merely a letter in the input text and the digest is radically altered.
By comparing the two digests on the receiving end, we can tell whether the text was modified.
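The avalanche effect is easy to see with Python’s standard `hashlib`; the two messages below differ by a single character:

```python
import hashlib

digest_a = hashlib.sha256(b"transfer $100 to Alice").hexdigest()
digest_b = hashlib.sha256(b"transfer $900 to Alice").hexdigest()

print(len(digest_a) * 4)     # 256 bits, hence the name SHA-256
print(digest_a == digest_b)  # False: one character changed, digest differs radically
```

Any tampering with the message in transit will therefore show up as a digest mismatch at the receiving end.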
Authentication and non-repudiation with digital signature
A digital signature is created from the digest and the private key. It achieves authentication and non-repudiation simultaneously.
Authentication is a process of proving your identity.
When we bring our ID cards to open a bank account, that’s authentication.
The private key is a natural way to prove identity.
Like your bank account password, you should be the only one who knows it. In the digital world, the other party assumes that whoever holds the private key is “you.”
But why digest?
Encrypting the digest with your private key is like signing a document with your name. You cannot deny it later: the digital signature is undeniable.
Why not the original text?
The digest is shorter. We know asymmetric-key encryption takes time: the shorter the text, the better the performance.
Besides, a short digest leads to a small digital signature that is easy to store and deliver.
The last step of the verification is similar to verifying the integrity — we compare two digests.
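The sign-then-verify flow can be sketched with textbook RSA. The tiny parameters below are hopelessly insecure and real TLS uses large keys with padded schemes (e.g., RSA-PSS); this only illustrates “encrypt the digest with the private key, recover it with the public key”:

```python
import hashlib

# Textbook-RSA signature sketch with tiny, INSECURE parameters (illustration only).
p, q = 61, 53
n = p * q    # 3233, the public modulus
e = 17       # public exponent (part of the public key)
d = 2753     # private exponent (e*d = 1 mod lcm(p-1, q-1))

def digest_mod_n(message: bytes) -> int:
    # Reduce the SHA-256 digest mod n so it fits our toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # "Encrypt" the digest with the private key.
    return pow(digest_mod_n(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recover the digest with the public key and compare against a fresh one.
    return pow(signature, e, n) == digest_mod_n(message)

sig = sign(b"I owe Bob $10")
print(verify(b"I owe Bob $10", sig))  # True
```

If the message is altered, the freshly computed digest no longer matches the one recovered from the signature, so verification fails.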
All set. After understanding what security HTTPS offers, it seems we now have everything needed to stay secure online.
Not so fast…
Public key verification with digital certificate
When talking about secrecy, we exchange a session key with asymmetric-key encryption.
The session key was encrypted by the public key. We assume the public key is a trusted one.
But what if it is a forged one generated by a hacker?
When you encrypt the session key with the forged public key, a hacker can easily decrypt it with the paired private key in hand.
How do you know the public key in your hand is the trusted one? Take GitHub, for example: we cannot tell whether the party behind github.com is really GitHub.
Certificate Authorities (CAs) join the game and solve this trust issue. They are a group of trusted third parties that certify the ownership of public keys. Popular ones are IdenTrust, DigiCert, and Sectigo.
Take github.com in Chrome, for example. We can check its certificate, which comes from DigiCert.
On the screenshot, you can see a 3-level certificate chain:
- DigiCert High Assurance EV Root CA
- DigiCert SHA2 High Assurance Server CA
- github.com
A higher-level CA endorses the CA below it.
We can see that the Root CA signs the High Assurance Server CA, which in turn signs github.com. The root CA is self-signed.
The certificate chain represents a chain of trust. With it, we don’t need to worry about forged public keys: a CA proves that it really is the GitHub team behind the domain.
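The chain-walking idea can be modeled with a few dictionaries. This is only a sketch of the trust links; real verification also checks cryptographic signatures, validity dates, hostnames, and revocation status:

```python
# Toy model of walking a certificate chain up to a trusted root.
# Real verification also checks signatures, expiry, hostnames, and revocation.
certs = {
    "github.com": {"issuer": "DigiCert SHA2 High Assurance Server CA"},
    "DigiCert SHA2 High Assurance Server CA": {
        "issuer": "DigiCert High Assurance EV Root CA"},
    # The root is self-signed: its issuer is itself.
    "DigiCert High Assurance EV Root CA": {
        "issuer": "DigiCert High Assurance EV Root CA"},
}
trusted_roots = {"DigiCert High Assurance EV Root CA"}

def chain_is_trusted(subject: str) -> bool:
    seen = set()
    while subject not in trusted_roots:
        if subject in seen or subject not in certs:
            return False  # loop or unknown issuer: no path to a trusted root
        seen.add(subject)
        subject = certs[subject]["issuer"]
    return True

print(chain_is_trusted("github.com"))    # True
print(chain_is_trusted("evil.example"))  # False
```

A browser ships with a store of trusted root CAs, so any leaf certificate that cannot be walked back to one of them is rejected.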
What if a CA is hacked? It actually happened in 2011: the attacker could make phishing websites look like trusted ones.
The authorities maintain the Certificate Revocation List (CRL) and the Online Certificate Status Protocol (OCSP) to prevent hackers from sabotaging the trust chain. With these tools, they can revoke compromised certificates.
Another fatal weakness is the CA itself. The entire trust system collapses if a trusted CA issues fraudulent certificates. When that happens, the root CA has to blacklist the fraudulent CA and revoke all the certificates it issued.
- If you are interested in the CA’s story, here is a short read: https://en.wikipedia.org/wiki/DigiNotar.
- More about CAs: https://en.wikipedia.org/wiki/Certificate_authority