HLS+: Support HLS Edge Cluster #466
This feature aims to solve the problems HLS has on edge servers.
As a result, SRS's edge mode cannot dynamically generate HLS, nor serve clients after pulling the stream from the origin in real time, the way HTTP-FLV can. This is also the fundamental reason why HLS cannot achieve low latency.
|
If HLS is treated as a stream, its latency can be reduced to the same level as FLV. When the client requests the M3U8, it is redirected to an M3U8 carrying a UUID parameter, for example, m3u8?uuid=154698. This UUID identifies the user, and the TS files carry the same identifier. This way, the edge server can dynamically generate HLS when the user connects. Instead of destroying this structure when the user closes the connection, it is kept for the next time the user reconnects. That means no HTTP origin pull is needed, only an RTMP origin pull. The efficiency is not as high as HTTP, but the latency is much lower.
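The session-identification idea above can be sketched as follows. This is a hypothetical illustration, not SRS code: a bare M3U8 request is redirected to a variant carrying a per-user UUID, and every TS URI in the rendered playlist is tagged with the same UUID so the edge can tell which requests belong to one session.

```python
import uuid

def make_session_redirect(path):
    """Redirect target for a bare M3U8 request, e.g.
    /live/stream.m3u8 -> /live/stream.m3u8?uuid=<id>."""
    sid = uuid.uuid4().hex
    return f"{path}?uuid={sid}", sid

def render_playlist(segments, sid, target_duration=10):
    """Render an M3U8 whose TS URIs all carry the session UUID.
    `segments` is a list of (sequence_number, duration) pairs."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{segments[0][0]}",
    ]
    for seq, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(f"stream-{seq}.ts?uuid={sid}")
    return "\n".join(lines) + "\n"

target, sid = make_session_redirect("/live/stream.m3u8")
playlist = render_playlist([(100, 10.0), (101, 10.0), (102, 9.8)], sid)
```

Because the UUID rides along on every TS request, the edge can group otherwise independent HTTP requests into one logical viewer session.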
|
This method solves the problem of the HLS edge being unable to recognize users. Some players close the connection after playing a single TS segment, so the server otherwise has no idea which requests belong to the same session.
|
In summary, the conclusion is that there is no need to build a traditional HTTP edge; just make the edge support HLS, exactly like the FLV edge, both using RTMP to pull from the origin.
|
The advantages of this solution are as follows:
Operational advantages:
The weaknesses of this solution are:
|
Remux and segment TS in real time on the edge machine, cache the latest 3-5 segments, and update the M3U8 file in real time (an in-memory cache). There is also no need to identify which player a TS request belongs to, as long as timestamps have an absolute reference on the current machine. In practice, the cost of real-time remuxing is minimal and manageable.
|
Remuxing to TS in real time and caching it is indeed a good solution; one connection's slices are effectively shared with other connections. But when should slicing stop? On a timeout?
|
Generate TS slices: as long as the upstream stream is flowing, keep a ring buffer, evicting old slices and updating the corresponding M3U8 (the number of segments it lists should be fewer than the number cached, so a segment is not evicted while a request for it is still in flight). When the upstream stream stops flowing, slicing naturally pauses. This slice cache is not bound to any player's connection, but to the lifecycle of the stream on this machine.
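A minimal sketch of the ring-buffer slicing described above (an assumed structure, not SRS code): keep the newest N segments in memory, let new arrivals evict the oldest, and advertise fewer segments in the M3U8 than are cached so a segment is never evicted while a playlist that still lists it is being consumed.

```python
from collections import deque

class SegmentRing:
    def __init__(self, cache_size=5, playlist_size=3):
        # The playlist must trail the cache, per the note above.
        assert playlist_size < cache_size
        self.cache = deque(maxlen=cache_size)  # deque evicts the oldest entry
        self.playlist_size = playlist_size
        self.seq = 0

    def push(self, data, duration):
        """Called by the slicer each time the upstream produces a segment."""
        self.cache.append((self.seq, duration, data))
        self.seq += 1

    def playlist(self):
        """The segments currently advertised in the M3U8 (the newest few)."""
        return list(self.cache)[-self.playlist_size:]

ring = SegmentRing(cache_size=5, playlist_size=3)
for _ in range(8):           # upstream keeps flowing: 8 segments arrive
    ring.push(b"ts-bytes", 10.0)
```

After eight segments, the cache holds sequences 3-7 while the playlist only advertises 5-7, so a client fetching segment 5 from an older playlist still finds it cached.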
|
In other words, as long as someone is pushing a stream to the origin server, the edge starts slicing, right?
|
Only streams the edge is currently pulling from the origin are sliced; streams that have not been pulled are left untouched. Combined with idle reclamation, that should be sufficient. "The upstream stream is flowing" refers to the stream the edge pulls from the origin, and whether that stream is flowing may not be obvious. There are two options for when slicing stops. One, as mentioned above, is to bind it to the lifecycle of the stream on the current machine and stop when the stream is reclaimed. The other is to start when a user accesses HLS and stop when no one has accessed it for a certain period. The downside of the latter is that the first user on each machine experiences a cold-start delay, but that is a minor issue and depends on what the application scenario can tolerate.
|
Let me first confirm one thing: our terminology may differ, so we might not be talking about the same thing. When you mention "the edge synchronizing from the source station," do you mean that when someone plays a stream on the edge, the edge pulls the stream from the origin and serves it? If so, let's call that "origin pull." However, there is one issue:
I'm not sure if I understood correctly.
|
When users are playing an edge RTMP stream, the stream is pulled from the origin. If new users then play the HLS version of this stream, the pulled stream provides the HLS service. If all RTMP users disconnect but HLS users remain, the service should continue, which means the origin pull must be maintained. That contradicts the rule that the origin pull stops when all users disconnect. The misunderstanding in the fourth question comes from a difference in our structural designs. In my earlier design, the program has a rotating buffer that stores the stream's original data (an FLV-encapsulated stream, or a custom format). Everything that interacts with it falls into two categories: INPUT plugins (pull from the origin, accept external push) and OUTPUT plugins (serve playback and synchronization externally). The stream can serve externally only while at least one INPUT plugin is working, and can be reclaimed only when all OUTPUT plugins have no users. HLS is just one of the OUTPUT plugins, so in the fourth question, "all disconnect" means users in all OUTPUT directions, including RTMP and FLV-over-HTTP, have disconnected; such an idle stream can be reclaimed.
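The INPUT/OUTPUT plugin model described above can be reduced to two predicates (the names below are illustrative, not from SRS): a stream can serve while at least one input is active, and may be reclaimed only when every output direction has zero users.

```python
class Stream:
    """Toy model of the rotating-buffer stream described above."""

    def __init__(self):
        self.inputs = set()      # active INPUT plugins, e.g. {"origin-pull"}
        self.output_users = {}   # OUTPUT plugin name -> current user count

    def can_serve(self):
        # Serviceable only while at least one INPUT plugin is working.
        return len(self.inputs) > 0

    def can_reclaim(self):
        # "All disconnect" means every OUTPUT direction is empty:
        # RTMP, HTTP-FLV, and HLS alike.
        return all(n == 0 for n in self.output_users.values())

s = Stream()
s.inputs.add("origin-pull")
s.output_users = {"rtmp": 0, "http-flv": 0, "hls": 2}
```

With two HLS users still attached, the stream keeps its origin pull alive even though every RTMP viewer is gone, which is exactly the case the comment above describes.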
|
In addition, the phrase "stop when no one visits for a certain period of time" means that when no HLS users have visited for a certain period, HLS slicing can stop. Whether the origin pull stops depends on whether all OUTPUT directions have no users.
|
Hmm, understood. If we treat HLS and HTTP-FLV on the edge as one structure, where playback triggers slicing (FLV slices are dropped when playback stops, while HLS uses a timeout), and HLS slices can be shared (FLV has no need to share), then there is no conflict. In fact, we should wait for all clients to go away (RTMP disconnect, HLS timeout, HTTP-FLV disconnect) before considering cleaning up the origin pull. In other words, RTMP and FLV use disconnection, while HLS treats a timeout as its disconnection. This way all three delivery types can be unified on the edge without conflict. Thank you~
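The unification above (timeout-as-disconnect for HLS) might be sketched like this. The timeout value and helper names are assumptions for illustration, not SRS configuration:

```python
HLS_IDLE_TIMEOUT = 30.0  # seconds; illustrative value, not from SRS

class HlsSession:
    """An HLS viewer, identified e.g. by the UUID discussed earlier."""

    def __init__(self, now):
        self.last_request = now

    def touch(self, now):
        # Called on every playlist or segment request from this session.
        self.last_request = now

    def disconnected(self, now):
        # HLS has no persistent connection, so a quiet period counts
        # as a disconnect.
        return now - self.last_request > HLS_IDLE_TIMEOUT

def stream_idle(rtmp_users, flv_users, hls_sessions, now):
    """The origin pull may be cleaned up only when all three delivery
    types are gone: RTMP/FLV by disconnect, HLS by timeout."""
    live_hls = [s for s in hls_sessions if not s.disconnected(now)]
    return rtmp_users == 0 and flv_users == 0 and not live_hls

t0 = 1000.0
sess = HlsSession(t0)
```

Ten seconds after its last request the session still counts as connected, so the stream is not idle; past the 30-second window it is treated as disconnected and cleanup can proceed.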
|
That is what I meant.
|
I named this feature: HLS+.
|
It seems that MSE and WebRTC are becoming stronger and stronger, so HLS+ no longer has a place; we can use MSE for 3s+ live streaming and WebRTC for 300ms communication.
|
HLS+ is quite complex, so it is worth considering implementing LL-HLS instead, as well as more standard protocols like H3-FLV. Enabling HLS on the edge should result in an error. Please refer to issue #1066 for more information.
|
Currently, only RTMP has edge and origin servers forming a load-balancing, fault-tolerant cluster; HLS delivery is supported only on origin servers. After SRS slices the HLS files, it serves them as an HTTP origin for distribution. If SRS supported HTTP edge servers, it would support HTTP origins plus edge clusters, allowing more comprehensive statistics and control.
TRANS_BY_GPT3