How to save bandwidth while delivering HD video?

Posted Jun 4, 2020 · 15 min read

Introduction: Data shows that domestic Internet traffic consumption reaches 200 EB per month, and 80% of it comes from video. With the popularization of 5G and the rapid development of cloud production and broadcasting, traffic consumption will keep growing, and behind it lies a very high bandwidth cost. How can technological innovation let users watch high-definition video smoothly while continuously reducing the average bandwidth cost? This article is shared by Alibaba Entertainment technical expert Qijia. It explains in detail the team's latest exploration in cloud-side content distribution, hoping to inspire engineers working in audio/video and content distribution. (Bonus at the end of the article: download the "Culture, Entertainment, Audio and Video Core Technology" e-book.)


1 Technical challenges of saving bandwidth costs

Starting from actual business scenarios, the goal of saving bandwidth costs faces many technical challenges.

First, Youku serves many types of terminals. Mobile terminals include Android, iPhone, and iPad; PC terminals include Windows and Mac; there are also Web and OTT terminals, and so on. Different terminals have different characteristics and different processing mechanisms, and even terminals of the same type may need special adaptation for particular models.

Second, there are many types of video services: live broadcast, on-demand, cache download, and short video, and each focuses on different indicators. For example, on-demand cares most about whether playback is smooth; live broadcast also cares about latency; cache download mainly cares about download speed.

Third, application scenarios are diverse. Even within the same service there are many subdivided scenarios with different processing strategies. For example, live streaming and on-demand both have a smart (adaptive) quality tier, cache download has offline downloading, and on-demand has double-speed playback. Different network types also require different handling.

Fourth, there are many videos, and many of them are long-tail videos.

  • Youku has a massive library, while edge node storage is limited and cannot hold everything, so the hottest resources must be identified and stored.
  • There are many video formats, and multiple definitions dilute the supply nodes for any single version, resulting in insufficient supply.
  • The long-tail effect is significant: most videos get only a small number of plays per day, and they also need a handling strategy.

2 Technology strategy: the cloud-side content distribution network

The cloud-side content distribution network, PCDN for short, is based on P2P technology. By tapping massive amounts of fragmented idle resources, it builds a high-quality, low-cost content distribution system.


PCDN is a three-level network acceleration system:

  • The first level is the cloud, which is the CDN.
  • The second level is the edge network, including edge nodes, routers, commercial WiFi, etc. These nodes do not consume content directly; they mainly act as source nodes, providing uplink bandwidth to other nodes.
  • The third level is terminal devices. These devices are the main consumers of traffic; a small portion of the more capable nodes can also supply other devices.

When a device plays video, the PCDN network provides acceleration to keep playback stable and smooth; by switching downloads among different nodes sensibly, the cost is minimized.

As the PCDN architecture shows, the cloud-side content distribution network is divided into three levels and covers more than ten types of nodes, each with different uplink capabilities, bandwidth costs, and storage capacities. Next, we compare the capabilities and characteristics of the nodes at the first, second, and third levels across five aspects, bandwidth cost, uplink capacity, storage size, node scale, and resource positioning, and finally position each type of node.


First-level nodes have strong upload capacity and stable service, but high bandwidth cost, so they are used to download necessary and urgent data.

The upload capacity of second-level nodes is slightly lower than that of first-level nodes, but they are online 24 hours a day and provide fairly stable service. Under certain conditions they can be used in place of first-level nodes.

Third-level nodes have weak upload capacity, small storage, and no guarantee of staying online. Their advantages are large scale and low cost. Downloading from multiple nearby points greatly reduces cost.

Therefore, based on the characteristics of the three levels of nodes, we avoid each type's weaknesses and make the most of its strengths. For example, third-level nodes have small storage, so to maximize their uplink bandwidth they cache only the hottest head resources. Second-level nodes have large storage and can act as a stable supply source, so they store medium-heat and long-tail videos.

3 The cornerstone of PCDN: P2P basic principles

At the bottom layer, the cloud-side content distribution network uses P2P technology. Consider a simple P2P model:


Node A and node B both watch episode 12 of "Bing Tang Zhu Li". Node A watches it first, and after watching, part of the data remains stored locally. When node B watches the same episode, it can download that data directly from node A via point-to-point transmission instead of downloading everything from the CDN.

The entire data-sharing process can be split into four key links: resource storage, node allocation, download scheduling, and data sharing.


1 Resource storage

The first link is resource storage. Three things are done here:

  • How to identify a resource.
  • How to normalize all kinds of resources into fixed-size slices.
  • How to decide which resources should be stored and which should not.

Resource identification

To identify a resource, the first thing that comes to mind is the URL. But a URL is too long, is inconvenient to exchange, and increases transmission overhead. In addition, a URL contains many variable elements, such as timestamps and authentication information; this interference must be removed, otherwise the same video would produce different resource IDs and the copies could not be shared with each other.

Youku has a resource ID generation algorithm that produces a globally unique resource ID from the key feature information of the URL. The ID is shorter than the URL, which makes it convenient to exchange.
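
As a rough illustration of the idea (not Youku's actual algorithm), here is a minimal Python sketch: volatile query parameters are stripped, the remaining stable features are hashed, and the digest serves as a short, globally consistent resource ID. The parameter list and the choice of MD5 are assumptions.

```python
import hashlib
from urllib.parse import urlsplit, parse_qsl, urlencode

# Hypothetical list of volatile query parameters to strip; the real feature
# set used by the production algorithm is not public.
VOLATILE_PARAMS = {"ts", "timestamp", "sign", "token", "auth_key"}

def resource_id(url: str) -> str:
    """Derive a short, stable resource ID from the stable parts of a URL."""
    parts = urlsplit(url)
    stable_query = urlencode(sorted(
        (k, v) for k, v in parse_qsl(parts.query)
        if k.lower() not in VOLATILE_PARAMS
    ))
    key_features = f"{parts.netloc}{parts.path}?{stable_query}"
    # A 128-bit digest is far shorter than a typical CDN URL.
    return hashlib.md5(key_features.encode("utf-8")).hexdigest()

# The same video requested with different timestamps/signatures maps to one ID.
a = resource_id("https://cdn.example.com/v/ep12.mp4?quality=hd&ts=1591&sign=abc")
b = resource_id("https://cdn.example.com/v/ep12.mp4?quality=hd&ts=1600&sign=xyz")
assert a == b
```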

Resource slicing

First, Youku has a massive number of video resources, and some files are very large. Once the cache limit is exceeded, the whole resource cannot be cached locally and P2P sharing becomes impossible, so large resources need to be split.

Second, video formats differ. HLS, for example, first requests an index file and then each TS segment in turn. Handling each video format separately would make the logic complex and expensive to maintain. Therefore, we slice video resources at the download entrance and normalize them into fixed-size pieces, which unifies the download kernel, simplifies the logic, and reduces maintenance cost.

Third, slicing improves sharing efficiency. As in the P2P principle above, node A does not need to finish the whole video before the data it has already downloaded can be shared. If the whole video were the unit of sharing, the entire file would have to be downloaded first, which would be very inefficient.

Fourth, slicing improves cache utilization. If a video is divided along its timeline, the number of plays at each time point is not the same; for example, very few people watch the opening and closing credits. Careful viewers will also notice the plot tips above the progress bar: exciting scenes are watched repeatedly, in variety shows the view count rises sharply when a star appears, and many people drag the progress bar and jump straight to the highlights. After slicing, we can store more of the heavily played pieces and fewer of the lightly played ones, improving overall resource utilization and the P2P sharing rate.
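
A minimal sketch of the slicing idea, in Python: whatever the container format, a resource is addressed as fixed-size slices, so any byte range maps to a set of slice indices. The 2 MB slice size and the key format are illustrative assumptions.

```python
SLICE_SIZE = 2 * 1024 * 1024  # assumed fixed slice size (2 MB), for illustration

def slices_for_range(start: int, end: int) -> list[int]:
    """Return the slice indices covering the byte range [start, end)."""
    return list(range(start // SLICE_SIZE, (end - 1) // SLICE_SIZE + 1))

def slice_key(resource_id: str, index: int) -> str:
    """A slice is identified by its resource ID plus its index within the resource."""
    return f"{resource_id}:{index}"

# A request for bytes 3,000,000 - 9,999,999 touches slices 1..4.
print(slices_for_range(3_000_000, 10_000_000))  # [1, 2, 3, 4]
```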

Resource storage

When downloading each slice, the consumer requests the list of nodes holding that slice from the server, so the server can estimate the slice's playback volume from how often it is requested. Since the server has access records for all resources, it can determine which slices are hot and which are cold. Edge nodes obtain this information when they interact with the server and use it to decide which resources to store and which not to.
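
A sketch of heat-driven cache admission and eviction on an edge node, assuming the server returns a per-slice request count alongside the node list; the capacity handling and eviction rule here are illustrative only.

```python
class SliceCache:
    """Keep only the hottest slices within a fixed capacity. Request counts are
    assumed to come back from the scheduling server together with node lists."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.heat: dict[str, int] = {}   # slice key -> request count reported by server
        self.stored: set[str] = set()

    def update_heat(self, key: str, request_count: int) -> None:
        self.heat[key] = request_count

    def should_store(self, key: str) -> bool:
        if len(self.stored) < self.capacity:
            return True
        coldest = min(self.stored, key=lambda k: self.heat.get(k, 0))
        # Admit the new slice only if it is hotter than the coldest slice we hold.
        return self.heat.get(key, 0) > self.heat.get(coldest, 0)

    def store(self, key: str) -> None:
        if not self.should_store(key):
            return
        if len(self.stored) >= self.capacity:
            coldest = min(self.stored, key=lambda k: self.heat.get(k, 0))
            self.stored.discard(coldest)  # evict the coldest slice
        self.stored.add(key)
```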


2 Node allocation

The second link is node allocation, which includes node screening, node scheduling, and smart distribution.

Node Screening

The first is filtering based on NAT type. NAT is address mapping, introduced to relieve the shortage of IPv4 addresses. Most home terminal devices sit behind a router and are actually assigned an internal IP address; when they access the external network, NAT maps them to a public IP and port. There are four common NAT types: full cone, address-restricted cone, port-restricted cone, and symmetric. Connectivity between two symmetric nodes, or between a symmetric node and a port-restricted node, is very poor. We therefore filter out node pairs that cannot connect, based on NAT type, raising the proportion of valid nodes returned.
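
A minimal sketch of that filter: the unreachable pairings named above (symmetric with symmetric, symmetric with port-restricted) are dropped before a node list is returned. The enum and function names are assumptions for illustration.

```python
from enum import Enum

class Nat(Enum):
    FULL_CONE = 1
    ADDR_RESTRICTED = 2
    PORT_RESTRICTED = 3
    SYMMETRIC = 4

# Pairings with very poor connectivity, per the text above.
BAD_PAIRS = {
    frozenset({Nat.SYMMETRIC}),                       # symmetric <-> symmetric
    frozenset({Nat.SYMMETRIC, Nat.PORT_RESTRICTED}),  # symmetric <-> port-restricted
}

def can_connect(a: Nat, b: Nat) -> bool:
    return frozenset({a, b}) not in BAD_PAIRS

def filter_candidates(consumer_nat: Nat, candidates: list[tuple[str, Nat]]) -> list[str]:
    """Return only the supply nodes the consumer can realistically reach."""
    return [node_id for node_id, nat in candidates if can_connect(consumer_nat, nat)]
```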

The second is node quality filtering. In a large-scale node network, node capabilities are uneven. If high-quality and poor-quality nodes are mixed together, the cost of quality assessment on the client side goes up. We therefore filter out poor-quality nodes in advance to raise the overall quality of the nodes returned.

Node scheduling

The first is the proximity principle. By distance, nodes are divided into five ranges, from near to far: adjacent nodes, same city, same province, same region, and nationwide. "Adjacent nodes" mainly means the same residential community, enterprise, or campus. The data shows that the shorter the distance, the higher the speed and the lower the latency.

The second is the capacity matching principle. For example, if node A's uplink capacity is twice that of node B, nodes are allocated in a 2:1 ratio so that the uplink capacity of both A and B can be fully used. If they were allocated 1:1, A's uplink might never be saturated while B becomes overloaded. A minimal sketch of such weighted allocation follows.
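
This sketch samples supply nodes with probability proportional to uplink capacity, so a node with twice the uplink is returned roughly twice as often; the field names and the sample-then-deduplicate approach are illustrative assumptions.

```python
import random

def allocate_by_capacity(nodes: list[dict], count: int) -> list[dict]:
    """Pick up to `count` supply nodes, weighted by uplink capacity."""
    weights = [n["uplink_kbps"] for n in nodes]
    # Sample with replacement, then deduplicate; good enough for a sketch.
    picked = random.choices(nodes, weights=weights, k=count * 3)
    chosen, seen = [], set()
    for n in picked:
        if n["node_id"] not in seen:
            seen.add(n["node_id"])
            chosen.append(n)
        if len(chosen) == count:
            break
    return chosen

nodes = [{"node_id": "A", "uplink_kbps": 2000}, {"node_id": "B", "uplink_kbps": 1000}]
# Over many requests, A is returned roughly twice as often as B.
print(allocate_by_capacity(nodes, 1))
```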

Smart Distribution

The screening and scheduling above mainly work on supply-side node information. Smart distribution is driven by consumer-side information: when a consumer node requests nodes, it reports some information about the current request, and the server dynamically calculates how many nodes to return and in what proportions.

The first factor is definition: the higher the definition, the more nodes are needed.

The second is the buffer water level. When the buffer level is low, more high-quality nodes are allocated to raise the download speed; when the buffer level is high, more low-cost nodes are allocated to reduce bandwidth cost.

The third is that the different node types are returned in certain proportions, so that every type of node gets allocated while the client side still keeps some room for its own decisions. A sketch combining these factors follows.
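
A sketch of how such an allocation might be computed on the server, assuming the consumer reports its definition and buffer level; all base counts, thresholds, and ratios here are invented for illustration.

```python
# Assumed base node counts per definition tier (illustrative numbers only).
BASE_NODES = {"sd": 8, "hd": 16, "1080p": 24, "4k": 40}

def plan_allocation(definition: str, buffer_seconds: float) -> dict[str, int]:
    total = BASE_NODES.get(definition, 16)
    if buffer_seconds < 10:    # low water level: favour fast, costly nodes
        ratio = {"cdn": 0.5, "edge": 0.3, "peer": 0.2}
    elif buffer_seconds < 30:  # transition: balanced mix
        ratio = {"cdn": 0.2, "edge": 0.4, "peer": 0.4}
    else:                      # high water level: favour cheap peer nodes
        ratio = {"cdn": 0.1, "edge": 0.2, "peer": 0.7}
    return {kind: max(1, round(total * share)) for kind, share in ratio.items()}

print(plan_allocation("1080p", 45))  # e.g. {'cdn': 2, 'edge': 5, 'peer': 17}
```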

3 Download scheduling

The third link is download scheduling, which includes the scheduling strategy, node management, and task assignment.

Scheduling strategy

Picture a playback progress bar. One point on it is the playback point: the data to its left has already been played. The segment to the right of the playback point is the range of data already cached locally, and the segment to the right of that is the data still to be downloaded; the boundary between them is called the download point.


In download scheduling, our basic principle is: experience first, cost second.

Based on how much data is buffered ahead of the playback point, the buffer can be divided into several zones. The red zone is the emergency zone: buffered data is scarce, and if it is not replenished in time the playback experience may suffer, so data is downloaded from CDN nodes to fill the buffer quickly. The middle zone is the transition zone: the situation is less urgent, so part of the data can be downloaded from second- and third-level nodes; if the buffer level drops, the first level keeps serving, and if it rises, the second and third levels take over the download. The rightmost zone is the safe zone, where everything can be downloaded from the second and third levels. On top of this there are many detailed strategies; for example, the two watermarks are not fixed but are decided in real time and adjusted dynamically according to historical stutter, current network quality, the number of P2P nodes, and download speed.
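
A minimal sketch of that zone-based decision, with the two watermarks passed in as parameters rather than constants; the example thresholds are illustrative.

```python
def choose_source(buffer_s: float, emergency_s: float, safe_s: float) -> str:
    """Decide which level of the network serves the next download task.

    The two watermarks are not fixed; the caller is expected to recompute them
    from historical stutter, current network quality, peer count and speed.
    """
    if buffer_s < emergency_s:
        return "cdn"           # emergency zone: fill the buffer from level 1
    if buffer_s < safe_s:
        return "edge_or_peer"  # transition zone: mix in levels 2/3
    return "peer"              # safe zone: levels 2/3 only

# With 8 s buffered and watermarks at 12 s / 40 s, we go back to the CDN.
print(choose_source(8.0, emergency_s=12.0, safe_s=40.0))  # "cdn"
```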

Node Management

The first is node acquisition. To save interaction with the server, the nodes for the next slice can be obtained before the previous slice finishes downloading, and some connections can be established in advance, so that when the next slice starts downloading the task requests can be sent immediately. During downloading, each node is scored based on information such as its first-packet time, download speed, and the quantity and quality of completed tasks. Over time, tasks converge onto the nodes with good quality, and poor-quality nodes are gradually eliminated.
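
A sketch of per-node scoring built from the signals mentioned above (first-packet time, speed, task completion); the formula and weights are arbitrary illustrative choices, not the production model.

```python
from dataclasses import dataclass

@dataclass
class NodeStats:
    first_packet_ms: float  # time to first byte on recent tasks
    speed_kbps: float       # observed download speed
    tasks_ok: int           # tasks completed successfully
    tasks_failed: int       # tasks that timed out or returned bad data

def score(stats: NodeStats) -> float:
    """Higher is better; the weights here are illustrative only."""
    total = stats.tasks_ok + stats.tasks_failed
    success_rate = stats.tasks_ok / total if total else 0.5
    return (stats.speed_kbps / 1000.0) * success_rate - stats.first_packet_ms / 500.0

def rank_nodes(stats_by_node: dict[str, NodeStats]) -> list[str]:
    """Tasks converge onto the best-scoring nodes; the worst are dropped."""
    return sorted(stats_by_node, key=lambda nid: score(stats_by_node[nid]), reverse=True)
```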

Task Assignment

Here we follow a "the more capable do more, and earn more" approach: nodes with fast download speeds and good quality are given priority and assigned more tasks. At the same time, we monitor each node's current download speed and RTT and predict in advance whether the remaining data will time out. If a timeout is predicted, the task is reclaimed early and redistributed to other nodes, so the task as a whole does not time out.
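
A sketch of the "predict and reclaim" idea: estimate whether the remaining bytes can arrive before the deadline at the node's current speed and, if not, hand the task to a backup node early. The data layout and the safety margin are assumptions.

```python
import time

def will_timeout(remaining_bytes: int, speed_bps: float, rtt_s: float,
                 deadline_ts: float, margin: float = 1.2) -> bool:
    """Predict whether the remaining data will miss its deadline at current speed."""
    if speed_bps <= 0:
        return True
    eta = time.time() + rtt_s + (remaining_bytes / speed_bps) * margin
    return eta > deadline_ts

def maybe_reassign(task: dict, current_node: dict, backup_nodes: list[dict]) -> dict:
    """Reclaim the task early and hand it to a backup node if a timeout is predicted."""
    if will_timeout(task["remaining_bytes"], current_node["speed_bps"],
                    current_node["rtt_s"], task["deadline_ts"]):
        return backup_nodes[0] if backup_nodes else current_node
    return current_node
```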


4 Data sharing

The fourth link is data sharing. The process is essentially this: two nodes establish a connection, the consumer sends a task request, and when the supplier receives it, it returns the actual data if it holds that data locally. Finally, on the consumer side, the data is uniformly verified to confirm it has not been tampered with, ensuring data consistency.

Node connection

Here we focus on the connection between nodes. A connection here means establishing connectivity: packets sent by one node can be received by the other. There are three main connection methods: direct connection, reverse connection, and hole punching. A direct connection is used mainly when the peer is a public network node that can be reached directly. A reverse connection is used when our side is on the public network (or has a full-cone NAT) and can be reached directly while the peer is behind NAT: we send a reverse-connection request to the peer's relay, the relay forwards it to the peer, and the peer then initiates a direct connection back to us. Finally, hole punching is usually used when both nodes are behind NAT.
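
A sketch of picking a connection method from the two sides' network situations, following the rules just described; the boolean inputs are the only assumptions.

```python
def connection_method(self_public: bool, self_full_cone: bool, peer_public: bool) -> str:
    """Pick how to establish connectivity between two nodes."""
    if peer_public:
        return "direct"         # the peer is directly reachable
    if self_public or self_full_cone:
        # We are reachable but the peer is behind NAT: ask the peer, via its
        # relay, to initiate a connection back to us.
        return "reverse"
    return "hole_punching"      # both sides are behind NAT

print(connection_method(self_public=False, self_full_cone=False, peer_public=False))
# "hole_punching"
```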

Data transmission

After the nodes are connected, what follows is mainly protocol interaction and data transmission. Here we also developed a reliable UDP transmission method and optimized its congestion control algorithm, adding mechanisms such as fast start, packet-loss prediction, and fast retransmission to make transmission more efficient.

Data validation

Data verification ensures data consistency. In a P2P network, dirty data is fatal: it spreads from one node to ten and from ten to a hundred, quickly polluting the whole network. Against this we have a complete set of safeguards, with verification mechanisms added at every link: disk storage, the upload link, network transmission, and the download link.

Common verification methods include MD5 and CRC. MD5 is highly secure but has high performance overhead; CRC is weaker but cheap. We adopted a combined MD5 + CRC scheme: MD5 verification for key data and CRC verification for non-critical data, which ensures data consistency while minimizing the performance cost.
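
A sketch of the combined scheme using Python's standard hashlib and zlib: MD5 for key slices, CRC32 for the rest. Which slices count as "key", and where the expected values come from, are assumptions here.

```python
import hashlib
import zlib

def verify_slice(data: bytes, expected: str, is_key_slice: bool) -> bool:
    """MD5 for key data, CRC32 for non-critical data; the expected value is
    assumed to come from the scheduler / resource metadata."""
    if is_key_slice:
        return hashlib.md5(data).hexdigest() == expected
    return format(zlib.crc32(data) & 0xFFFFFFFF, "08x") == expected

payload = b"slice payload"
print(verify_slice(payload, hashlib.md5(payload).hexdigest(), is_key_slice=True))  # True
```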


Generally, the more popular an episode is, the more cost-effective it is: a hot drama has a large playback volume, many source nodes, and a good sharing effect. However, this rule does not hold for newly released sources. Resource caching needs a time window; many hot dramas are released at night and users watch them the moment they go live, but in that short time the newly released source does not yet have many source nodes. Within this window, supply and demand are badly unbalanced and the P2P sharing rate is relatively low.

In response, we adopted intelligent pre-push plus fast back-to-source. Intelligent pre-push combines the playback volume of the preceding episodes, the playback-volume curves of similar videos, and playback information broken down by region, carrier, definition, and other dimensions to calculate which regions' edge nodes to push to and how many copies to push, and pushes the content in advance before the new episode goes live.

Fast back-to-source mainly targets resources that are hard to predict and whose access surges within a short time. The edge nodes go back to the source, quickly download the whole video, and then accelerate the nodes that watch it later.

With these two methods, when a popular episode goes online we no longer need to worry that missing or insufficient supply nodes will drag down the sharing rate.


Large-scale live broadcast mainly refers to live broadcasts of major galas and sports events, such as the Double 11 Cat Night, the Spring Festival Gala, the National Day parade, and the World Cup.

The live broadcast scenario is quite different from on-demand, and the challenges are greater, mainly in several respects:

  • The first is low latency. To keep live latency low, the amount of data available in real time is very limited, and everyone's playback points are close together, so the data available for sharing is very small and supply nodes fall short.
  • The second is the low buffer level. With limited data, generally only 2 or 3 TS segments are available, lasting from a few seconds to a dozen or so, which places very high demands on scheduling. The emergency-zone watermark mentioned above is meant to absorb tens of seconds of network jitter, but live broadcast does not have that much buffer, so a poorly designed scheduling strategy easily causes stuttering; yet giving up too much of the front buffer reduces P2P sharing.
  • The third is high dynamism. For on-demand, once a device has cached a resource it can supply others, regardless of what the user is doing. Live broadcast is different: once a viewer leaves the live room, that stream's data is cut off and can no longer supply other nodes, and entering and leaving live rooms is very common during a broadcast.

To improve the overall sharing rate of live broadcast, we lean on edge nodes here. First, edge nodes are used as supply, because their service is relatively stable; second, edge nodes keep their data highly synchronized with the CDN, so in most cases they can replace the first level.

4 Practical experience sharing

For a large and complex network system, you need to learn to look at it from the perspectives of points, lines, and planes, going from large to small, layer by layer, while also being able to step outside the system to see its overall internal and external environment and its upstream and downstream interactions. At the whole-system level, you need to understand the business model and operating principles of the system. At the plane level, you need to be familiar with how the system's planes are divided and how they interact; PCDN, for example, contains a scheduling plane, a control plane, a business plane, basic services, and so on. At the line level, you need to break each plane into its basic functional modules and layers; on the business plane, for example, there are upload, download, release, and so on. At the point level, you need to understand each individual technical point, algorithm, and strategy.

First, when facing a complex network system, the first step is decomposition: split the system into several subsystems, and clarify each subsystem's function, its inputs, and its outputs. Decomposition alone is not enough; you must also define indicators for each subsystem that reflect how well it is doing, and surface all the indicators through data and reports. Only then can you see the whole picture of the system and where the bottlenecks are.


Second, technology must be deeply integrated with the business to deliver its full value. Take the double-speed playback mentioned earlier: if the required download speed cannot be determined accurately, the decision may be wrong, data that should come from the CDN is not fetched in time, and playback stutters.

Finally, model and iterate quickly. In the PCDN system, many scenarios and functions can be abstracted into everyday models; download scheduling, for example, resembles a reservoir model, and node scheduling is essentially a supply-and-demand model. Once a model is established, timely feedback is needed. There are many ways to obtain it, such as stress testing, training with large numbers of samples, and bucketed (A/B) verification; these are used to judge the quality of the model and find the optimal value of each parameter in it.

Bonus | Download the e-book "Culture, Entertainment, Audio and Video Core Technology"


In 2019, the vast majority of Internet traffic came from audio and video services. Taking Youku as an example, billions of videos are watched every day by hundreds of millions of users, and the Internet traffic consumed daily reaches the PB level. Making the user experience "clearer and smoother" is the core of Alibaba Entertainment's technology. Starting from media production, playback technology, broadcast control, and copyright, this book analyzes Alibaba Entertainment's core audio and video technology in detail.

Click "Read Original" to download now!
