Online video streaming has become widespread in the past two decades, and much of this growth can be attributed to the HLS video protocol. This protocol changed the video streaming game by helping overcome the industry’s significant compatibility challenges in the early 2000s.
Today, HLS remains one of the most popular protocols for delivering content to video players. It’s also used for encoder ingest for many streaming setups.
This article will cover everything you need to know about the HTTP live streaming protocol. We’ll discuss exactly how HLS works, its adaptive bitrate process, and more.
What is HTTP live streaming (HLS)?
HTTP live streaming (HLS) is an adaptive HTTP-based protocol designed to carry video signals over the internet.
Apple created and released HLS in 2009 to solve the problem of efficiently delivering live video and VOD to viewers’ devices, especially Apple devices. A main focus in creating the protocol was the scaling problem that plagued protocols such as RTMP (used by Flash) and RTSP.
Over the last few years, the HLS protocol has become one of the most popular protocols for both compatibility and quality of experience. It is widely supported on major devices, browsers, and platforms.
HLS is widely supported for delivery to video players:
- iOS
- Android
- Google Chrome
- Safari
- Microsoft Edge
- Linux
- Smart TV platforms
Some streaming setups also use HLS as the end-to-end protocol, meaning they use it to ingest content during encoding. However, this requires using an HLS-compatible encoder. Fortunately, more encoders are becoming compatible with this technology.
When to use HLS?
HLS should be used whenever the quality of the viewer’s experience is the first priority. The spec is widely adopted across devices, browsers, and platforms, and its compatibility makes it a go-to for most of the industry.
Of course, if you want to use HLS, you must ensure it’s compatible with all of the relevant components of your streaming setup.
How does HLS work?
At a high level, HLS is fairly simple: instead of sending out one continuous file, the video stream is split into small media segments, each a small file of a specific length. These files are packaged and shipped in MPEG-TS or fragmented MP4 (fMP4) containers.
The maximum length of such a segment in HLS is called the target duration. A player downloads these segments one after another and plays them in the order they are listed in a playlist.
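As an illustrative sketch, a media playlist (.m3u8) for an fMP4 stream with a six-second target duration might look like the following (the file names are placeholder assumptions):

```
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-MAP:URI="init.mp4"
#EXTINF:6.0,
segment0.m4s
#EXTINF:6.0,
segment1.m4s
#EXTINF:6.0,
segment2.m4s
```

Each `#EXTINF` tag gives the duration of the segment on the line that follows it, and no `#EXTINF` value may exceed the declared target duration.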
For live streams, new segments are appended to the end of the playlist file. When a player reloads the playlist, it sees the newly listed segments and can download and play them. The protocol dictates that the playlist should be reloaded once every target duration.
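To make that flow concrete, here is a minimal Python sketch of how a player might parse a media playlist and spot newly listed segments after a reload (the inline playlist strings and the parsing are simplified assumptions, not a full HLS parser):

```python
def parse_segments(playlist: str) -> list[str]:
    """Return the segment URIs listed in a media playlist.

    Simplified: a real parser must also honor tags such as
    #EXT-X-MEDIA-SEQUENCE and #EXT-X-ENDLIST.
    """
    return [
        line.strip()
        for line in playlist.splitlines()
        if line.strip() and not line.startswith("#")
    ]

# First fetch of a live playlist lists two segments...
old = parse_segments("#EXTM3U\n#EXTINF:6.0,\nseg1.m4s\n#EXTINF:6.0,\nseg2.m4s\n")
# ...and a reload one target duration later lists a new one at the end.
new = parse_segments("#EXTM3U\n#EXTINF:6.0,\nseg2.m4s\n#EXTINF:6.0,\nseg3.m4s\n")

# Segments the player has not yet downloaded:
fresh = [uri for uri in new if uri not in old]
print(fresh)  # ['seg3.m4s']
```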
It’s also worth noting that HLS removed the need for an open connection and instead uses an HTTP server and cacheable files. Plus, HLS achieves scalability through a Content Delivery Network (CDN) and ordinary web servers. These design characteristics were strategically built to overcome the significant scaling issues of RTMP.
Adaptive bitrate (ABR) switching with HLS
HLS supports an adaptive bitrate (ABR) process called ABR switching. This lets the stream quality match each viewer’s internet connection and allows the player to switch bitrates at any time.
To allow for adaptive bitrate switching, HLS Playlists are grouped in a Master Playlist that can link to different streams. This allows a player to choose the stream with the bitrate and resolution best suited for its network and device.
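As a sketch, a Master Playlist with three variant streams might look like this (the bitrates, resolutions, and URIs are illustrative assumptions):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/playlist.m3u8
```

The player measures its available bandwidth while playing and requests segments from whichever variant playlist best fits its current conditions.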
Having multiple streams available prevents interruptions and buffering when bandwidth changes mid-stream, adapting playback to each viewer’s circumstances.
Check out Apple’s General Authoring Requirements to learn more about the suggested targets for encoding and HLS streams.
What causes latency with HLS?
The latency caused by HLS is related to the target duration. For a streaming server to list a new segment in a playlist, this segment must be created first. As such, the server must buffer a segment of “target duration” length before publishing it.
In the worst-case scenario, the first frame a player can download is already “target duration” seconds old.
Segments usually last two to six seconds. Like most streaming protocols, HLS requires a healthy buffer of three or four segments. This buffer is designed to create a robust stream in the event of network or server issues.
The reason is that segments must be encoded, packaged, listed in the playlist, downloaded, and added to the player’s buffer. This often results in latencies of 10 to 30 seconds, even before accounting for encoding, first-mile, distribution, and network delays.
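The arithmetic above can be sketched as a rough back-of-the-envelope estimate (the segment length and buffer depth below are assumptions within the ranges mentioned):

```python
def estimate_hls_latency(target_duration: float, buffered_segments: int) -> float:
    """Rough latency floor in seconds: one segment still being produced
    on the server, plus the segments held in the player's buffer.
    Ignores encoding, first-mile, distribution, and network delays."""
    return target_duration * (1 + buffered_segments)

# Six-second segments with a three-segment player buffer:
print(estimate_hls_latency(6.0, 3))  # 24.0 seconds behind live
```

Shrinking either factor reduces latency, which is why low-latency HLS variants work with much shorter pieces of media.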
Final thoughts
Although it cannot match the low latency of WebRTC and other popular protocols, HLS remains a valuable protocol in the streaming industry for use cases where quality, scalability, and compatibility are the primary focus.
If you need help determining which technologies are most appropriate for your streaming needs, reach out to us, and an expert from our team will help you out.