In the previous blog post, we covered how best to bring your applications to Samsung Tizen and touched on the two approaches for delivering audio and video content to Samsung Tizen. In this blog, we will discuss how to leverage Samsung Tizen’s native playback component, AVPlay, and list its known use-case limitations.
Native Approach: AVPlay, Tizen’s Native Video Player
As with most smart TV systems, Tizen comes with its own native video player, providing support for the most common streaming protocols. Tizen’s native video player is called AVPlay. Using AVPlay isn’t too difficult and simply requires loading the AVPlay API library. This is as simple as loading the following script:
<script type="text/javascript" src="$WEBAPIS/webapis/webapis.js"></script>
With this library loaded, you can enable playback easily with the AVPlay API by:
1. Defining the playback window and indicating AVPlay is to be used,
var objElem = document.createElement('object');
objElem.type = 'application/avplayer';
document.body.appendChild(objElem);
2. Configuring a media source,
webapis.avplay.open('https://content.domain.com/channel1.m3u8');
3. Loading the media source to prepare for playback,
webapis.avplay.prepare();
4. and triggering playback.
webapis.avplay.play();
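In a real application, you will typically also want to size the playback window and listen for player events. Below is a minimal sketch, assuming the object element from step 1 is already in place; the URL, display coordinates and callbacks are illustrative, while setDisplayRect(), setListener() and prepareAsync() are part of the same AVPlay API:

// Configure the source and size the playback area (coordinates are illustrative)
webapis.avplay.open('https://content.domain.com/channel1.m3u8');
webapis.avplay.setDisplayRect(0, 0, 1920, 1080);

// Receive notifications on buffering and errors
webapis.avplay.setListener({
    onbufferingstart: function () { console.log('buffering started'); },
    onbufferingcomplete: function () { console.log('buffering complete'); },
    onerror: function (eventType) { console.log('playback error: ' + eventType); }
});

// prepareAsync() avoids blocking the UI while the stream loads
webapis.avplay.prepareAsync(function () {
    webapis.avplay.play();
});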
As you can see, getting playback up and running is very simple. Beyond this, AVPlay brings some additional capabilities such as:
- Support for streaming protocols such as Smooth Streaming, MPEG-DASH and HTTP Live Streaming (HLS),
- The ability to perform basic trick play by seeking or changing the playback speed,
- The ability to receive notifications in case of buffering or errors,
- Support for alternative audio channels,
- Basic DRM playback,
- and support for subtitle formats such as SMPTE-TT, SAMI and DFXP.
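To illustrate a few of these capabilities, the sketch below combines trick play with audio track selection. The calls used (setSpeed(), jumpForward(), getTotalTrackInfo() and setSelectTrack()) are part of the AVPlay API, but the jump distance and track index are placeholders:

// Trick play: jump 10 seconds forward, then play at double speed
webapis.avplay.jumpForward(10000); // value in milliseconds
webapis.avplay.setSpeed(2);

// Alternative audio: list the available tracks and select another one
var tracks = webapis.avplay.getTotalTrackInfo();
for (var i = 0; i < tracks.length; i++) {
    if (tracks[i].type === 'AUDIO') {
        console.log('audio track ' + tracks[i].index + ': ' + tracks[i].extra_info);
    }
}
webapis.avplay.setSelectTrack('AUDIO', 1); // index of the desired audio track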
AVPlay: Challenges and Use-case Limitations
AVPlay has been around since Tizen’s early days in 2015 (with the launch of Tizen 2.3) and has evolved a lot; as a result, AVPlay differs significantly in capabilities across different Tizen platform versions. The main purpose of AVPlay, however, hasn’t changed: empowering media apps with basic playback capabilities. In practice, when building a full media app, the capabilities of AVPlay soon reach their limits.
There are two main challenges which OTT video publishers often encounter when working with AVPlay:
- Capabilities are limited to basic use cases and offer little flexibility.
- Different versions of AVPlay are tied to specific versions of streaming protocols and differ (greatly) in capabilities.
While the limited support for basic use cases is often not seen as a major issue initially, the problems grow once new business cases are added. Some notable use cases which are difficult or even impossible to support properly are:
- Monetization through client-side (CSAI) or server-side ad insertion (SSAI).
- Reducing playback latency for optimal user experience in interactive or live broadcasts.
- Advanced bitrate selection (or even manual bitrate selection).
- Identifying streaming issues, as error responses are extremely brief.
- Monitoring QoE, due to a lack of detailed events (no possibility to inspect the buffer size, details of different tracks and qualities, …).
- Support for alternative subtitle formats and subtitle styling.
- Improved scrubbing with thumbnails or I-FRAME streams.
On top of these limitations, there are many other fundamental technical limitations across different Tizen versions. These, consequently, make AVPlay a less viable option for delivering a reliable viewing experience on Samsung Tizen. In our next blog post, we will explore them and list the varying support levels across Samsung Tizen versions and models.
HTTP Live Streaming
First, let’s recap the basics of HTTP Live Streaming (HLS) in one paragraph. It is fairly simple: a video stream is split up into small video segments, which are listed in a playlist file. A player is responsible for loading the playlist, loading the relevant segments, and repeating this until the video has fully played. To allow for dynamic quality selection and avoid buffering, different playlists can be listed in a “master playlist”. This allows players to identify the ideal bitrate for the current network and switch from one playlist to another.
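As a refresher, a minimal (hypothetical) master playlist with two qualities could look like this; all URIs and bitrates are illustrative:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/playlist.m3u8

Each media playlist then lists the individual segments:

#EXTM3U
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:270
#EXTINF:6.0,
segment270.ts
#EXTINF:6.0,
segment271.ts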
For more details on how HLS works, check out our blog post here.
How Does LL-HLS Work?
Apple recently extended HLS to enable lower latency. When HLS was developed in 2009, scaling the streaming solution received the highest priority, and latency was sacrificed as a result. As latency has gained importance over the last few years, HLS was extended to LL-HLS. As Rome wasn’t built in a day, neither was the specification. It took some weird twists and turns (you can read all about it in our previous blog post), but it was finalized on the 30th of April 2020 with an official update of the HLS specification.
Low Latency HLS aims to provide the same scalability as HLS, but with a latency of 2-8 seconds (compared to 24-30 second latencies with traditional solutions). In essence, the changes to the protocol are rather simple:
- A segment is divided into “parts”, making them something like “mini segments”. Each part is listed separately in the playlist. In contrast with segments, listed parts can disappear from the playlist while the segments containing the same data remain available for a longer time.
- A playlist can contain “Preload Hints” to allow a player to anticipate what data must be downloaded, which reduces overhead.
- Servers get the option to provide unique identifiers for every version of a playlist, easing cacheability of the playlist and avoiding stale information in caching servers.
These three changes are at the core of the LL-HLS specification, so let’s go into each of them in some more detail. (Note: there are some other changes, such as “delta playlists”; these are less crucial for the correct working of LL-HLS and rather tend to optimize things.)
HLS parts are in practice just “smaller segments”. They are denoted with an “EXT-X-PART” tag inside the playlist. As having a lot of segments (or parts) increases the size of the playlist significantly, parts are only listed close to the live edge. Parts also have no requirement to start with an IDR frame, meaning it is OK to have parts which cannot be played individually. This allows servers to publish media information while the segment is still being generated, allowing players to fill up their buffers more efficiently. As a result, buffers on the player side can be significantly smaller compared to normal HLS, which in practice results in reduced latency.

Parts can be addressed with unique URIs or with byte ranges. The use of byte ranges in the spec allows the segment to be sent as one file, out of which the parts can be separately addressed and requested, as the excerpt below illustrates.
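Here is a hypothetical live-edge excerpt of an LL-HLS media playlist; the durations and file names are made up, but the tags and attributes come from the specification:

#EXT-X-PART-INF:PART-TARGET=1.0
#EXTINF:6.0,
segment270.mp4
#EXT-X-PART:DURATION=1.0,INDEPENDENT=YES,URI="part271.0.mp4"
#EXT-X-PART:DURATION=1.0,URI="part271.1.mp4"

Using byte ranges instead, each part would point into the (growing) segment file, for example URI="segment271.mp4",BYTERANGE="42000@0", where the value is a length and an offset in bytes.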
Another new tag, called “EXT-X-PRELOAD-HINT”, provides an indication to the player of which media data will be required to continue playback. It allows players to anticipate and fire a request to the URI listed in the preload hint for faster access. Servers should block requests for known preload hint data and return it as soon as it becomes available. While this approach is not ideal from a server perspective, it makes things a lot simpler on the player side. Servers can still disable blocking requests, but supporting this capability is definitely recommended. All of this adds up to faster delivery of media data to players, reducing latency.
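At the live edge, such a hint is a single extra line in the playlist (the part URI here is again hypothetical):

#EXT-X-PRELOAD-HINT:TYPE=PART,URI="part272.0.mp4"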
The third important update to the HLS specification is the ability to request specific versions of media playlists through query parameters passed in the playlist URI. Using simple parameters, a playlist can be requested which contains a specific segment or part. Players are recommended to request every playlist using these unique URIs. As a result, a server can quickly see what the player needs and provide an updated playlist when available. This capability is extended in such a way that the server can enable blocking requests on these playlists. Through this mechanism, a server is empowered to deliver the updated playlist as soon as it becomes available. Players detecting this capability can switch strategies for identifying which media data will be needed in the future, decreasing the need for large buffers and… reducing latency further.
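In the specification, these query parameters are called delivery directives. A server advertises support for blocking playlist requests with the EXT-X-SERVER-CONTROL tag:

#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=3.0

A player can then ask for the playlist version containing a specific media sequence number (_HLS_msn) and part (_HLS_part), and the server holds the request until that version is available; the numbers here are illustrative:

GET /channel1/playlist.m3u8?_HLS_msn=272&_HLS_part=1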
As the protocol extension is backwards compatible with HLS, players which are not aware of LL-HLS capabilities will still be able to play the stream, just at a higher latency. This is important for devices which rely on old versions of player software (such as smart TVs and other connected devices), as it removes the need to set up a separate stream for devices capable of playing in low latency mode.
LL-HLS vs. LL-DASH
In contrast to LL-HLS, Low Latency DASH (LL-DASH) does not have the notion of parts. It does, however, have a notion of “chunks” or “fragments”. In LL-DASH, segments are split into smaller chunks (often containing a handful of frames), which are then delivered using HTTP chunked transfer encoding (CTE). This means the origin doesn’t have to wait until the segment is completely encoded before the first chunk can be sent to the player. There are some differences with the approach of LL-HLS, but in practice it is quite similar.
A first difference is that LL-HLS parts are individually addressable, either as tiny files or as byte ranges in the complete segment. LL-DASH has an advantage here, as it does not depend on a manifest update before the player can make sense of a new chunk. LL-HLS, however, has the flexibility to provide additional data on parts, such as marking where IDR frames can be found and decoding can be started.

Interestingly, it is possible to reuse the same segments (and thus chunks and parts) for both LL-HLS and MPEG-DASH. In order to achieve this, segments should be created with chunks/parts large enough to support LL-HLS. By addressing the LL-HLS parts as byte ranges inside a larger segment, the segment can be listed both in an LL-DASH manifest and an LL-HLS playlist. This way, each LL-HLS part can be provided to the client once it is available on the origin. Similarly, the DASH origin can provide the same data as a chunk to the player over CTE. While both an MPEG-DASH manifest and an HLS playlist would need to be generated, this avoids duplicate storage of the media stream, which can lead to large cost savings.
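As a sketch of how this dual listing could look (all names, durations, offsets and attribute values below are hypothetical), the LL-HLS playlist would address byte ranges inside the shared segment:

#EXT-X-PART:DURATION=1.0,URI="seg271.m4s",BYTERANGE="42000@0"
#EXT-X-PART:DURATION=1.0,URI="seg271.m4s",BYTERANGE="39000@42000"

while the LL-DASH manifest references the same files, with availabilityTimeOffset signalling that a segment may be requested (and read chunk by chunk over CTE) before it is complete:

<SegmentTemplate media="seg$Number$.m4s" duration="6" availabilityTimeOffset="5.0" startNumber="270"/>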
When we look at compatibility, LL-HLS and LL-DASH are mostly supported on the same platforms, but there are some important nuances. In the case of iOS, we don’t expect LL-DASH to be used in production due to the restrictions of the App Store. The trickiest platforms to support are predicted to be smart TVs and some connected devices. On connected devices, especially older ones, if you are restricted to native or limited players, you could face problems supporting the low latency use case: older versions of the software don’t support it, and software updates on those platforms are rare.

What to Expect with LL-HLS Moving Forward
The LL-HLS preliminary specification has been merged into the HLS standard, which means it’s safe for vendors to start investing in LL-HLS adoption. We anticipate vendors will aim to start supporting LL-HLS in production in September, as it’s likely that Apple will ship LL-HLS as part of iOS 14 alongside its new iPhone releases. During Apple’s WWDC 2020, which kicked off on June 22nd, Roger Pantos hosted a session on LL-HLS, which you can watch here. In the session, Pantos announced that LL-HLS is coming out of beta and will be available for everyone in iOS 14, tvOS 14, watchOS 7, and macOS later this year, most likely during the iPhone event in September. You can also read about the updates in a previous blog post. At THEO, we already have beta players up and running. If you’re looking to get started with an implementation, you can talk to our Low Latency experts.