Setting up your own low-latency HLS server to stream from any source input

This tutorial covers setting up a Low-Latency HLS (LL-HLS) streaming server using FFmpeg, Apple's mediastreamsegmenter and NGINX. Apple's HLS reference tools ([1]) provide instructions on running a simple LL-HLS server using tsrecompressor and mediastreamsegmenter. However, that setup only supports live source streams generated locally from a fixed test pattern, with at most 4 renditions at fixed bitrates (0.5 Mbps, 2 Mbps, 4 Mbps and 7.5 Mbps) and resolutions (see Figure 1 below). The tsrecompressor manual mentions a command-line option "-i / --input-file" for generating source streams from a local file, but tsrecompressor always crashes when this option is used. While the simple reference server provides a good starting point for LL-HLS evaluation, it falls short for more advanced scenarios, for example when developers want to use their own test sources. This tutorial shows how to set up an LL-HLS server that can stream from virtually any live source, such as a webcam, screen capture, local files or remote sources.

Figure 1: An LL-HLS stream generated using Apple’s reference HLS tools

No proprietary tools are needed for this tutorial. The tools you will need are:

  • Apple’s HLS stream packager, mediastreamsegmenter,
  • Apple’s PHP script for low-latency HLS serving, lowLatencyHLS.php,
  • FFmpeg (a basic build suffices) to generate live source streams,
  • NGINX web server for LL-HLS delivery,
  • PHP-FPM (FastCGI Process Manager) to run lowLatencyHLS.php,
  • Testing players: Shaka player, Hls.js, Theoplayer and AVPlayer.

This tutorial starts by reviewing how to set up a simple LL-HLS server using Apple’s reference HLS tools. Next, it describes a more complex setup with arbitrary live sources and video encoding profiles.

This tutorial requires a high-level understanding of the LL-HLS standard ([2]).

Figure 2: Multi-bitrate LL-HLS streaming from FFmpeg-generated live sources, played at low latency in Shaka player

Figure 3: LL-HLS master playlist with 4 renditions

Apple’s reference LL-HLS tools

Apple provides the required tools ([1]) for setting up a reference LL-HLS server. You can log in to your Apple developer account and download the tools as a tar file. After installation, follow the readme file in the installation folder to set up the server. In short, developers use tsrecompressor to generate live source streams and send them to the HLS packager, mediastreamsegmenter. Tsrecompressor can generate up to 4 time-synchronized live source streams. To ingest them, four instances of mediastreamsegmenter need to run, each handling one source stream. The live source streams are transported from tsrecompressor to the mediastreamsegmenter instances as MPEG Transport Stream (TS) over multicast UDP. Mediastreamsegmenter receives the source streams, repackages them into one of three formats (TS, CMAF or FMP4), then outputs LL-HLS chunks/segments and playlists. The LL-HLS output can be hosted on a web server, such as NGINX or Apache. In addition, since an LL-HLS stream is essentially dynamic web content, the reference server also includes a PHP script, lowLatencyHLS.php, to serve the stream chunk by chunk as the chunks are generated.

Tsrecompressor can only generate up to 4 live source streams from a fixed test pattern (Figure 1), constrained to a fixed set of bitrates and resolutions. The rest of this article describes how to set up a more advanced LL-HLS server that can serve LL-HLS streams from virtually any live source. This setup has been tested against 4 LL-HLS players: Shaka player, Hls.js, Theoplayer and AVPlayer.

Setting up the origin streaming server

NGINX ([3]) can be used for delivering LL-HLS streams, though other web servers work as well. You will need to enable the following NGINX features, as LL-HLS delivery depends on them:

  • PHP: NGINX must be able to run PHP scripts, which serve the streams in low-latency mode.
  • HTTP/2: Even though Apple dropped the requirement for HTTP/2 push in the latest LL-HLS specification, HTTP/2 is still recommended per Appendix B.1 of [2].
  • HTTPS: Since most web browsers require encryption for HTTP/2 traffic, HTTPS is needed as well.

Depending on your specific setup, you might need further NGINX configuration. For example, the location directory of the server must match the output directory of mediastreamsegmenter, and you might need to configure CORS (cross-origin resource sharing) depending on where the players run.

Setting up PHP-FPM and connecting it to NGINX

To run PHP scripts, you first need to install and run PHP-FPM ([4]), then configure NGINX to talk to it. PHP-FPM can run anywhere reachable from the NGINX server; for simplicity, you can run it on the same machine as NGINX. Add the following to your NGINX configuration:

location ~ \.php$ {
    try_files $uri =404;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

Above, the localhost IP and port 9000 are used by NGINX to reach PHP-FPM. You can make PHP-FPM listen on port 9000 by editing its www.conf: look for a line like "listen = 127.0.0.1:9000". Restart PHP-FPM after modifying the configuration. NGINX can now talk to PHP-FPM, and PHP-FPM can execute lowLatencyHLS.php.
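
For reference, the two steps look roughly like this (the www.conf path and the service name vary across distributions):

# In PHP-FPM's www.conf (e.g., /etc/php/8.2/fpm/pool.d/www.conf on Debian/Ubuntu):
listen = 127.0.0.1:9000

# Restart PHP-FPM after editing (service name varies, e.g., php-fpm or php8.2-fpm):
sudo systemctl restart php-fpm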

Configuring NGINX location directory

The NGINX location directory must match the output directory of mediastreamsegmenter. For example, if mediastreamsegmenter outputs to /tmp/llhls/, configure /tmp/llhls/ as the directory backing the location "/llhls" of your domain.
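
As a minimal sketch (the paths are illustrative and must match your own setup; the CORS header is only needed if the players are hosted on another origin):

location /llhls/ {
    root /tmp;                                  # URI /llhls/... maps to /tmp/llhls/...
    add_header Access-Control-Allow-Origin *;   # CORS, if players run elsewhere
}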

Configuring HTTP/2 in NGINX

HTTP/1.1 works well for LL-HLS playback with Shaka player, Hls.js and AVPlayer; Theoplayer, however, requires HTTP/2, which the LL-HLS specification recommends anyway. Enabling HTTP/2 in NGINX requires adding only one directive to the configuration file: "listen 443 ssl http2;". However, since most browsers require TLS (Transport Layer Security) when HTTP/2 is used, you will also need to configure HTTPS in NGINX.

Configuring HTTPS in NGINX

To enable HTTPS, you need to generate a TLS certificate and private key and configure them in NGINX. For example, on Linux you can use openssl to create a self-signed certificate. To add it to NGINX, add the following directives to the server section of the NGINX configuration file:

ssl_certificate [path_to_your_cert_file];
ssl_certificate_key [path_to_your_key_file];
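
For reference, a self-signed certificate can be created with openssl roughly as follows, and the ssl directives then live in the same server block as the HTTP/2 listen directive from the previous section (the host name and file paths below are placeholders):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt -subj "/CN=your.host.name"

server {
    listen 443 ssl http2;
    server_name your.host.name;
    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    # ... location blocks for /llhls/ and \.php$ go here ...
}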

For web players (Shaka, Hls.js and Theoplayer), a self-signed certificate is enough for HTTPS-based streaming. For AVPlayer on Apple devices, however, App Transport Security (ATS) does not trust self-signed certificates by default. There are two workarounds. First, in your AVPlayer application code, catch the "untrusted certificate" error and add an exception that trusts your self-signed certificate. The approach recommended by Apple, however, is to build your own Certificate Authority (CA) and create a CA-signed certificate ([5]) for your site. Then, on your Apple devices, add and trust your CA root certificate ([6]); from then on, any certificate signed by your own CA will be trusted by your AVPlayer application, and HTTPS-based streaming should succeed.

Running mediastreamsegmenter and lowLatencyHLS.php

Mediastreamsegmenter ingests live source streams and writes LL-HLS segments/chunks and playlists to the web server. When mediastreamsegmenter and the web server run on the same machine, mediastreamsegmenter can write the LL-HLS segments/playlists directly into the location directory configured in NGINX. For example, create a directory called "llhls/" under your NGINX root directory, then create one sub-directory per rendition under llhls/ to hold that rendition's LL-HLS segments and playlists, e.g., "llhls/low", "llhls/mid", "llhls/high". Before running mediastreamsegmenter, copy lowLatencyHLS.php into each per-rendition folder (this is covered in the instructions of Apple's HLS tools [1]). Also place the master playlist under "llhls/", and make sure it refers to the right rendition folders, so that the segments and variant playlists can be found by NGINX and served through PHP-FPM.
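
As a concrete sketch of this layout (the directory names are the examples above; the bitrates and resolutions in the master playlist are placeholders and must match your actual encoding profiles):

mkdir -p /tmp/llhls/low /tmp/llhls/mid /tmp/llhls/high
cp lowLatencyHLS.php /tmp/llhls/low/
cp lowLatencyHLS.php /tmp/llhls/mid/
cp lowLatencyHLS.php /tmp/llhls/high/

A master playlist saved as /tmp/llhls/master.m3u8 could then reference the per-rendition PHP scripts, which serve the variant playlists:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/lowLatencyHLS.php
#EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=1280x720
mid/lowLatencyHLS.php
#EXT-X-STREAM-INF:BANDWIDTH=4000000,RESOLUTION=1920x1080
high/lowLatencyHLS.php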

The instructions of Apple's HLS tools ([1]) show how to start mediastreamsegmenter. You need to specify the output folder for the LL-HLS segments and playlists using the "-f" option, as well as the multicast address and port on which to receive the live source streams. It is better to run mediastreamsegmenter and your live source generator (e.g., FFmpeg) on the same local area network, to avoid complex multicast routing configuration. Make sure the firewall does not block the UDP ingest ports. Avoid multicast addresses starting with "224." as some of them are reserved; anything between 225 and 239 should be fine. Finally, specify the output segment format: the default is MPEG-TS, and you can also specify "--iso-fragmented" (FMP4) or "--cmaf-fragmented" (CMAF). For playback with Shaka player and Hls.js, FMP4 or CMAF should be used. For AVPlayer, both CMAF and TS work, but I found TS preferable for performance reasons. Run "man mediastreamsegmenter" to view more options. Once a mediastreamsegmenter instance starts, it waits for live source streams to come in.
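
As a hedged sketch, one instance per rendition might be started like this (the multicast address, ports and output paths are placeholders; Apple's readme and the man page describe further LL-HLS-specific options not shown here):

mediastreamsegmenter --iso-fragmented -f /tmp/llhls/low 239.1.1.1:9121
mediastreamsegmenter --iso-fragmented -f /tmp/llhls/mid 239.1.1.1:9122
mediastreamsegmenter --iso-fragmented -f /tmp/llhls/high 239.1.1.1:9123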

Running FFmpeg to generate the live source streams

FFmpeg can generate the live source streams and send them to the mediastreamsegmenter instances. The source can be a webcam, screen capture, a local file played repeatedly, or an external live feed. To use a local file, pass "-re -i [file_name]" to read the file at its native frame rate, and "-stream_loop -1" to loop it indefinitely. You can then transcode the input into multiple renditions within a single FFmpeg command, so that all renditions are time-synchronized. Unlike tsrecompressor, which supports at most 4 renditions with a fixed set of bitrates, FFmpeg places practically no limit on the number of renditions or their encoding profiles; you only need to start the same number of mediastreamsegmenter instances to handle them. You can choose the resolutions, bitrates, output format (e.g., mpegts) and destinations (IPs and ports) for your renditions. However, keep the frame rate and key frame interval fixed, e.g., "-r 30" and "-force_key_frames "expr:gte(t,n_forced*1)"". A fixed key frame interval avoids variable segment durations, with which players may not play stably. Note that mediastreamsegmenter must already be running when the FFmpeg transcoder starts; shortly after you start FFmpeg, mediastreamsegmenter will begin to ingest and output the LL-HLS stream. Finally, encoding multiple renditions can be very resource-consuming, so make sure your machine has enough power to do the work. If one machine cannot run both mediastreamsegmenter and FFmpeg, you can run them on different machines. In that case mediastreamsegmenter may complain about "un-aligned segments"; if this happens, configure mediastreamsegmenter and FFmpeg to communicate via a unicast IP address instead of a multicast one.
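
Putting the pieces together, a two-rendition command might look as follows (the input file, sizes, bitrates and multicast destinations are placeholders; the ports must match the listening mediastreamsegmenter instances):

ffmpeg -stream_loop -1 -re -i input.mp4 \
    -filter_complex "[0:v]split=2[v1][v2];[v1]scale=640:360[v1o];[v2]scale=1280:720[v2o]" \
    -map "[v1o]" -map 0:a -c:v libx264 -b:v 800k -r 30 \
        -force_key_frames "expr:gte(t,n_forced*1)" \
        -c:a aac -b:a 96k -f mpegts udp://239.1.1.1:9121 \
    -map "[v2o]" -map 0:a -c:v libx264 -b:v 2000k -r 30 \
        -force_key_frames "expr:gte(t,n_forced*1)" \
        -c:a aac -b:a 128k -f mpegts udp://239.1.1.1:9122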

Playback

This LL-HLS server setup has been tested with 4 video players that support LL-HLS. You can build simple player applications on top of the core libraries these players provide. The web players can run in Chrome. For Shaka player, make sure you set "lowLatencyMode" and "autoLowLatencyMode" to true. For Hls.js, set "lowLatencyMode" to true. For Theoplayer, refer to [7] for how to configure LL-HLS playback.
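
As a minimal JavaScript sketch (the stream URL is a placeholder; the option names are the ones mentioned above):

// Shaka player: enable low-latency mode before loading the stream.
player.configure({
    streaming: { lowLatencyMode: true, autoLowLatencyMode: true }
});
await player.load('https://your.host.name/llhls/master.m3u8');

// Hls.js: pass lowLatencyMode in the constructor configuration.
const hls = new Hls({ lowLatencyMode: true });
hls.loadSource('https://your.host.name/llhls/master.m3u8');
hls.attachMedia(document.querySelector('video'));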

For AVPlayer, LL-HLS support has been available since iOS 13.3, released in 2019. You will need to write a simple iOS player application using the AVFoundation SDK; nothing special is needed to enable low-latency mode. Xcode is needed to run the player application on an iPhone. You can also write player applications for macOS or iPadOS.

Troubleshooting

  1. If your player does not start because it fails to download the variant playlists (lowLatencyHLS.php), make sure NGINX can talk to PHP-FPM; variant playlists fail to load when the PHP script does not run properly. A quick curl check, shown after this list, can help verify this.
  2. Make sure your LL-HLS master playlist describes the stream and renditions correctly and matches your FFmpeg encoding profiles.
  3. Make sure your server runs on a machine with enough system resources, because FFmpeg transcoding can consume lots of CPU cycles and memory.
  4. Make sure HTTPS is properly configured in NGINX, and in the players and browsers. Make sure CORS (cross-origin resource sharing) is properly configured in NGINX. Alternatively, you can host your (web) players on the same server as the LL-HLS stream.
  5. For AVPlayer, make sure your CA-signed certificate is used on both the client and server side, and that the CA root certificate is trusted by your Apple devices. Before running your AVPlayer application on real devices, first run it on simulated devices. The error logs and error codes printed on the Xcode console can also reveal a lot of debug information.
  6. You can use Apple's mediastreamvalidator ([1]) to validate your LL-HLS stream if playback fails; it should provide lots of hints.
  7. You can always try your LL-HLS streams in Safari first. Even though Safari won't play the stream at low latency, it can help in troubleshooting.
  8. If AVPlayer does not play the stream at low latency, make sure you are running iOS 13.3 or later.
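
For item 1, a quick sanity check is to fetch a variant playlist directly with curl and confirm that playlist text, rather than an error or the raw PHP source, comes back (the URL is a placeholder; -k skips certificate verification for self-signed certificates):

curl -vk https://your.host.name/llhls/low/lowLatencyHLS.php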

If you have any questions about this tutorial, feel free to contact me at maxutility2011@gmail.com.

References

[1]. About Apple’s HTTP Live Streaming Tools, https://developer.apple.com/documentation/http_live_streaming/about_apple_s_http_live_streaming_tools

[2]. HTTP Live Streaming, 2nd edition, https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-08

[3]. NGINX, https://github.com/nginx/nginx

[4]. PHP-FPM, https://php-fpm.org

[5]. Creating Certificates for TLS testing, https://developer.apple.com/library/archive/technotes/tn2326/_index.html#//apple_ref/doc/uid/DTS40014136

[6]. Self-Signed certificates in iOS apps, https://medium.com/collaborne-engineering/self-signed-certificates-in-ios-apps-ff489bf8b96e

[7]. How to configure THEOplayer to play your low-latency HLS streams, https://docs.theoplayer.com/how-to-guides/07-miscellaneous/03-low-latency/02-configure-ll-hls.md

About the author

Bo Zhang is a staff video research engineer at Brightcove Inc. His areas of expertise include online video streaming, IP networking and telecommunications. He received his Ph.D. from George Mason University, his M.S. from the University of Cincinnati, and his B.S. from Huazhong University of Science and Technology, all in computer science. He received the best paper award at ACM MSWiM 2011, the flagship research conference of ACM SIGSIM.

