In this blog post, our Acting Manager OTT Delivery, Jonas Rydholm Birmé, describes the live origin that we built in-house and run in the cloud. We built our live origin ourselves because we needed some specific functionality, including the ability to use more than one CDN to distribute the content over the last mile.
Let us first explain the role of a Live Origin in streaming. The signal from a broadcast channel or live event is first transcoded by a Live Encoder into multiple resolutions at different bitrates. This makes it possible for the video player to select a resolution that fits within the available bandwidth and thus avoid buffering. If the bandwidth situation improves, the player can “step up” to a higher resolution, and vice versa it can “step down”. To be able to step up and down within a stream, the video is chunked into smaller segments, which in this simplified scenario is also done by the Live Encoder. The Live Encoder then needs to place the segments somewhere the video player can download them from. This is the role of the Live Origin.
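The step-up/step-down logic described above can be sketched as a simple bitrate-ladder selection. This is a generic illustration, assuming a made-up ladder and a player-side safety margin; it is not our encoder's actual profiles:

```python
# Illustrative bitrate ladder (kbit/s -> rendition name), highest first.
# These values are examples, not TV4's real encoding profiles.
LADDER = [
    (6000, "1080p"),
    (3500, "720p"),
    (1800, "540p"),
    (900, "360p"),
]

def select_rendition(measured_bandwidth_kbps: float, safety_factor: float = 0.8) -> str:
    """Pick the highest rendition whose bitrate fits within a safety
    margin of the measured bandwidth; fall back to the lowest otherwise."""
    budget = measured_bandwidth_kbps * safety_factor
    for bitrate, name in LADDER:
        if bitrate <= budget:
            return name
    return LADDER[-1][1]  # always return something playable
```

As the measured bandwidth changes between segment downloads, repeated calls to `select_rendition` produce exactly the stepping behaviour described above.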
In reality the video player does not download segments directly from the Live Origin; instead it goes through a Content Delivery Network (CDN). The CDN is a network of nodes located closer to the end-users, where every node caches the segments it serves, so when another end-user near the same node wants a segment it gets the cached copy instead. To make sure we can always reach the end-user, we secure this last-mile delivery by having more than one CDN. In our case we have at least two CDNs that fetch segments from our Live Origin.
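The multi-CDN redundancy can be illustrated with a minimal failover selector. The hostnames and the health check are hypothetical; real CDN selection would typically weigh in performance measurements as well:

```python
def pick_cdn(cdns, is_healthy):
    """Return the first healthy CDN from an ordered preference list,
    or None if every CDN is currently considered unhealthy."""
    for cdn in cdns:
        if is_healthy(cdn):
            return cdn
    return None

# Hypothetical CDN endpoints, in order of preference.
CDNS = ["cdn-a.example.com", "cdn-b.example.com"]
```

With at least two entries in the list, losing one CDN does not stop segment delivery; traffic simply shifts to the next healthy one.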
Our Live Origin can be divided into three components, each serving a specific use case:
- DRM Origin
- DAI Origin
- L2V Origin
The DRM Origin is responsible for delivering content-protected (DRM) streams, such as the simulcasts of the broadcast channels (SVT, TV4 and C More) we have in our services, and sports events where DRM protection is mandated. DRM protection is added just-in-time, which makes it possible for us to support basically all DRM systems (Apple, Microsoft and Google) and streaming formats in use today. It also provides the functionality that makes it possible to watch a sports event 48 hours after it started.
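Just-in-time protection implies choosing a DRM system per requesting client. A rough sketch of such a mapping is shown below; the platform keys are simplified assumptions, and the DRM system names (Apple FairPlay, Microsoft PlayReady, Google Widevine) are the commonly used systems from the three vendors mentioned above:

```python
# Simplified assumption: one DRM system per client platform.
DRM_BY_PLATFORM = {
    "ios": "FairPlay",      # Apple
    "safari": "FairPlay",
    "edge": "PlayReady",    # Microsoft
    "android": "Widevine",  # Google
    "chrome": "Widevine",
}

def drm_for(platform: str) -> str:
    """Resolve which DRM system to apply when packaging just-in-time."""
    try:
        return DRM_BY_PLATFORM[platform.lower()]
    except KeyError:
        raise ValueError(f"unsupported platform: {platform}")
```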
The DAI Origin is purpose-built to handle live streams into which we dynamically insert online ads (Dynamic Ad Insertion). When you are watching Idol live or live news events on TV4 Play, this is the origin being used.
Our L2V Origin (Live to VOD) is also purpose-built, making it possible for our viewers to watch live events immediately after they have aired. The streams uploaded to this origin carry markers indicating where the TV commercial breaks were placed, and we have developed functionality that instantly creates a version where the commercials are removed or replaced with online ads. This VOD version is made available as soon as the event has ended.
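The marker-based trimming can be sketched as filtering out the segments that fall inside a commercial break. The segment and marker representation below is an assumption, loosely modeled on cue-out/cue-in style markers; the real pipeline would also handle replacing the break with online ads:

```python
def strip_ad_breaks(segments):
    """Drop segments inside a commercial break.

    Each segment is a (name, marker) pair where marker is 'cue_out'
    (first segment of a break), 'cue_in' (first content segment after
    the break) or None. This shape is an illustrative assumption."""
    kept, in_break = [], False
    for name, marker in segments:
        if marker == "cue_out":
            in_break = True
        elif marker == "cue_in":
            in_break = False
        if not in_break:
            kept.append(name)
    return kept
```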
As the number of nodes the CDNs use can vary depending on their ability to cache and on how many concurrent users are watching, we need to protect our origin servers from such traffic peaks. We protect our origin with an Origin Shield, which uses a feature called cache locking to do that. Cache locking means that if two or more clients want to access the same segment at the same time, only one of these requests goes down to the origin. The segment is then first cached in the shield and afterwards delivered to the other clients. In practice, the other clients have to wait for the segment to be cached before it is delivered to them.
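Cache locking (sometimes called request coalescing) can be sketched as follows. This is a generic in-memory illustration of the idea, not the shield's actual implementation, and it ignores cache expiry:

```python
import threading

class SingleFlightCache:
    """Coalesce concurrent requests for the same key so that only one
    caller fetches from origin; the others wait for the cached result."""

    def __init__(self):
        self._cache = {}
        self._locks = {}
        self._mutex = threading.Lock()

    def get(self, key, fetch_from_origin):
        with self._mutex:
            if key in self._cache:
                return self._cache[key]
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:  # only one caller per key proceeds to the origin
            with self._mutex:
                if key in self._cache:  # filled while we were waiting
                    return self._cache[key]
            value = fetch_from_origin(key)
            with self._mutex:
                self._cache[key] = value
                self._locks.pop(key, None)
            return value
```

However many clients ask for the same segment simultaneously, `fetch_from_origin` runs once; everyone else blocks on the per-key lock and is then served from the cache, which is exactly the waiting behaviour described above.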
Monitoring and Logging
To get instant feedback on how well both our Live Origin and our overall OTT delivery are performing, we have a number of monitoring points. Logging is also important for being able to narrow down the problem in case of an incident.
- Our first monitoring point is in the video player, which contains a plugin that reports back to us, for example, how long it took to start the stream, the number of bufferings, and play failures.
- The second monitoring point is on the CDNs, where we use a system with probes that measures the performance of the CDNs. This system can also instantly turn off a CDN if it seems to behave badly.
- The third monitoring point is the outgoing traffic (egress) on the Origin Shield, where we make sure we stay within the network bandwidth limitations. We also monitor the cache hit ratio: if it were to drop significantly, our origins would be hit with more requests.
- The fourth monitoring point is on the origin servers, where we monitor, for example, network traffic, read/write operations on disk, and CPU, memory and disk utilization.
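The cache hit ratio check from the third monitoring point can be expressed as a simple threshold alert. The threshold value here is an arbitrary example, not our actual alerting configuration:

```python
def hit_ratio(hits: int, misses: int) -> float:
    """Cache hit ratio; every miss is a request that reaches the origin."""
    total = hits + misses
    return hits / total if total else 0.0

def should_alert(hits: int, misses: int, threshold: float = 0.95) -> bool:
    """Alert when the hit ratio drops below the threshold, since each
    lost percentage point translates directly into more origin load."""
    return hit_ratio(hits, misses) < threshold
```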
In addition to these monitoring points, we also have a tool that simulates a video player; it monitors all streams and alerts us if it fails to play back a stream.
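A minimal version of such a playback monitor can check that a live HLS media playlist keeps advancing between polls. The parsing below is a simplified sketch (it assumes `#EXT-X-MEDIA-SEQUENCE` and `#EXTINF` tags on their own lines), not our actual monitoring tool:

```python
def last_sequence(playlist: str) -> int:
    """Return the media sequence number of the last segment
    in an HLS media playlist (simplified line-based parsing)."""
    seq, count = 0, 0
    for line in playlist.splitlines():
        if line.startswith("#EXT-X-MEDIA-SEQUENCE:"):
            seq = int(line.split(":", 1)[1])
        elif line.startswith("#EXTINF"):
            count += 1
    return seq + count - 1

def is_stalled(previous: str, current: str) -> bool:
    """A live stream is considered stalled if its playlist
    did not advance between two consecutive polls."""
    return last_sequence(current) <= last_sequence(previous)
```

Polling each stream this way catches a stalled encoder or origin even when all the lower-level metrics still look healthy.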
We use a remote logging system to collect all server logs in one place, making it easier to search and filter among them.
To handle code deployment for our Live Origin, we use a server-template system from our cloud provider that enables us to automate starting up a new server. It installs the OS and the tools we use for remote logging and SSH login, and installs the Docker engine. All our origin software is bundled inside a Docker container, so when we need to update the software our CI system builds a new container, runs unit tests against it and, if the tests pass, pushes it to our Docker registry. Deploying the new container version with the new code is however not fully automated: before any deploy to our Live Origin in production, it is first tested in our stage environment.
In summary, the reasons we built our own Live Origin were to support multiple CDNs, support content protection (DRM), make it possible to watch a sports event 48 hours later, offer a catch-up on-demand version of a TV-broadcast live event with the TV ads removed as soon as the event has ended, and facilitate dynamic in-stream ad insertion.