Performance Optimization Discussion #3437
@Slind14 are there baseline performance numbers for the current server which I can refer to?
What are you looking for exactly? We believe that Prebid Server Go could be 6 to 10 times faster than it is now if we compare it with other internal projects we have. However, this level of optimization and complexity is unreasonable for an open-source project whose goal is to keep contributions simple. Hence the suggestion is to focus on replacing some of the "slow" libraries and removing unnecessary memory allocations where it is trivial, so it does not hurt contributions.
We welcome improvements to the project, especially those related to performance. Prebid Server is often run at a very high scale, where efficiency gains can directly translate to cost savings and reduced emissions.

Prebid Server uses established and well-maintained libraries. It's important to choose libraries which are likely to issue security updates, properly handle edge cases, and improve their design as Go evolves. Some of the libraries were chosen many years ago, and a re-evaluation is a good idea.

Prebid Server is written in pure Go to give hosts the flexibility to choose their preferred operating system and CPU architecture. I don't see an issue with including libraries optimized for a specific platform, provided we maintain pure Go fallbacks or can determine there is no negative impact on existing hosts.

You're welcome to experiment with performance optimizations and submit a PR with your proposal. Please provide benchmarks to demonstrate the change. There's not much benchmark coverage in this project, so you'll likely need to write new ones in your PR.
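For reference, a minimal sketch of a Go benchmark with allocation reporting, placed in a *_test.go file; the package name and payload are made up for illustration and are not Prebid Server fixtures.

```go
package jsonbench

import (
	"encoding/json"
	"testing"
)

// samplePayload is a made-up, bid-request-like document used only for this example.
var samplePayload = []byte(`{"id":"req-1","imp":[{"id":"imp-1","banner":{"w":300,"h":250}}]}`)

// BenchmarkUnmarshalSample reports time and allocations per decode.
// Run with: go test -bench=. -benchmem
func BenchmarkUnmarshalSample(b *testing.B) {
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		var dst map[string]interface{}
		if err := json.Unmarshal(samplePayload, &dst); err != nil {
			b.Fatal(err)
		}
	}
}
```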
There are three scenarios for GZIP compression in Prebid Server.
For scenario 1 we currently use "NYTimes/gziphandler", and for the other scenarios we use the standard "compress/gzip". You're welcome to experiment with other libraries provided they adhere to the guidelines above.
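For orientation, here is a minimal sketch of wrapping an http.Handler with "NYTimes/gziphandler" for response compression; this is standalone example code with a made-up route and payload, not Prebid Server's actual handler wiring.

```go
package main

import (
	"io"
	"net/http"

	"github.com/NYTimes/gziphandler"
)

func main() {
	// A plain handler that writes an uncompressed body.
	plain := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, `{"status":"ok"}`)
	})

	// GzipHandler compresses the response when the client sends
	// Accept-Encoding: gzip, and passes it through unchanged otherwise.
	http.Handle("/status", gziphandler.GzipHandler(plain))
	http.ListenAndServe(":8080", nil)
}
```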
The line you commented on is creating a new slice, which uses some extra memory but does not copy the underlying array. There is potential to use a sync.Pool for the gzip encoding buffers. The buffer is returned from getRequestBody in the form of a slice pointing to the buffer memory, which is then used as part of the HTTP request object. You'll likely need to refactor the lifecycle of the buffer to achieve reuse with a pool.
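To make the idea concrete, here is a minimal sketch of pooling gzip encoding buffers with sync.Pool. The gzipBody helper is hypothetical, not Prebid Server code; as noted above, the real change would need to return the buffer to the pool only after the HTTP request built from the slice is finished with it.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// gzipBody is a hypothetical helper: it compresses payload into a pooled
// buffer and returns both the compressed slice and the buffer, because the
// slice points into the buffer's memory and the buffer may only be reused
// after the slice is no longer needed.
func gzipBody(payload []byte) ([]byte, *bytes.Buffer, error) {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()

	zw := gzip.NewWriter(buf)
	if _, err := zw.Write(payload); err != nil {
		bufPool.Put(buf)
		return nil, nil, err
	}
	if err := zw.Close(); err != nil {
		bufPool.Put(buf)
		return nil, nil, err
	}
	return buf.Bytes(), buf, nil
}

func main() {
	body, buf, err := gzipBody([]byte(`{"id":"req-1"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(body), "compressed bytes")
	bufPool.Put(buf) // returned only after the request using body is done
}
```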
We recently switched from "encoding/json" to "json-iterator/go" after reviewing alternative options. We dismissed the "mailru/easyjson" library because it's no longer maintained and has a pre-build step. That said, I think the community would support a pre-build step for a significant performance gain. While many projects claim to be compatible as a drop-in replacement for "encoding/json", in reality none are. We chose "json-iterator/go" because it was the closest and required the least amount of modification for a 2-3x performance gain. There were other libraries we evaluated which encode faster and use fewer allocations, such as "goccy/go-json", but those failed in specific edge cases covered by Prebid Server tests. As part of replacing the json library, we created new utility methods in "util/jsonutil" to centralize the library call. This will make it easier for you, or others, to try different libraries.
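As an illustration of what centralizing the library call looks like, a thin wrapper keeps the choice of encoder in one place; this is a sketch, not the actual util/jsonutil code.

```go
// Package jsonwrap is an illustrative stand-in for a centralized JSON
// utility package; it is not the real util/jsonutil implementation.
package jsonwrap

import jsoniter "github.com/json-iterator/go"

// jsonImpl is the single point where the JSON library is selected, so
// swapping to another encoder only touches this file.
var jsonImpl = jsoniter.ConfigCompatibleWithStandardLibrary

func Marshal(v interface{}) ([]byte, error)      { return jsonImpl.Marshal(v) }
func Unmarshal(data []byte, v interface{}) error { return jsonImpl.Unmarshal(data, v) }
```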
This is planned. We first need to fix a compatibility issue in "json-iterator/go" where RawMessage objects are left completely untouched but are compacted in "encoding/json". While the behavior of "json-iterator/go" may be preferred, we need to ensure all Prebid Server adapters are compatible with a potential change. With over 200 adapters, we don't have the ability to test them individually.
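A small self-contained example of the standard-library side of that difference (the struct and field names are made up): encoding/json compacts a RawMessage during Marshal, while json-iterator, as described above, leaves the raw bytes untouched.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type wrapper struct {
	Ext json.RawMessage `json:"ext"`
}

func main() {
	w := wrapper{Ext: json.RawMessage(`{ "bidder" : { "placement" : 1 } }`)}
	out, _ := json.Marshal(w)
	// encoding/json compacts the raw bytes: {"ext":{"bidder":{"placement":1}}}
	fmt.Println(string(out))
}
```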
As a prerequisite to changing the logging library, we'd like to create an intermediary type similar to what we did with json encoding. That will decouple this project from a specific library and make it easier to replace - or offer a choice to hosts. We haven't had the bandwidth to work on that refactor and community contributions are encouraged.
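A sketch of what such an intermediary type could look like, assuming the current dependency is glog; this is illustrative only, not existing Prebid Server code.

```go
// Package logging sketches an intermediary logger type; the interface and
// adapter below are illustrative, not existing Prebid Server code.
package logging

import "github.com/golang/glog"

// Logger is the surface the rest of the project would depend on.
type Logger interface {
	Infof(format string, args ...interface{})
	Warningf(format string, args ...interface{})
	Errorf(format string, args ...interface{})
}

// glogLogger adapts the current glog dependency to the interface; a host
// or a future refactor could swap in another implementation here.
type glogLogger struct{}

func (glogLogger) Infof(format string, args ...interface{})    { glog.Infof(format, args...) }
func (glogLogger) Warningf(format string, args ...interface{}) { glog.Warningf(format, args...) }
func (glogLogger) Errorf(format string, args ...interface{})   { glog.Errorf(format, args...) }

// Default is the implementation used unless a host provides another.
var Default Logger = glogLogger{}
```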
Compatibility is paramount and this change is harder to verify before deployment. Do you run Prebid Server and have the ability to run experiments in production (at a low traffic level) to help prove other libraries are as complete in handling edge cases as the standard Go library?
We encourage hosts to tweak their GC settings to better accommodate a large number of heap allocations. With this optimization, the performance impact of reducing allocations is less noticeable - but it's still a positive impact on the project. I recommend keeping PRs of this kind smaller in scope for a quicker review.
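For concreteness, a minimal example of that kind of GC tuning; the values are arbitrary examples, not recommendations, and most hosts would set the equivalent GOGC and GOMEMLIMIT environment variables instead of calling the runtime API.

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Raise the GC target from the default 100 so collections run less
	// often under a high rate of short-lived heap allocations.
	previous := debug.SetGCPercent(400)

	// Cap total heap usage (Go 1.19+) so the higher GOGC cannot grow
	// memory without bound; 4 GiB is an arbitrary example value.
	debug.SetMemoryLimit(4 << 30)

	fmt.Println("previous GOGC:", previous)
}
```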
That's a bold goal. I kindly dare you to try. :)
Discussed in committee. We agreed to spin off a sub-committee to work through the details. We will start the conversation in the Prebid Slack workspace.
Thank you, @bretg. @SyntaxNode thank you for the detailed response as well. For your information, the proprietary tech we have handles an equal amount of QPS at one eighth of the CPU usage, while spending over 90% of its time on machine learning predictions. Hence, there is a lot of room for improvement :)
Does it make sense to break these into different issues, or sub-issues of some kind, to track the work? Are there similar opportunities in Java?
Hi Prebid Server Community,
We would like to start a discussion about improving the performance of Prebid Server (Go).
These recommendations come from our experience operating Go very efficiently at high scale.
- Compressions
- JSON
- Logging
- HTTP
- General Optimization
Overall, it should be possible to halve, if not cut to a third, Prebid Server's compute requirements for the same QPS.
While the cost savings and carbon-footprint improvements might not amount to much yet, more and more is being moved server-side, and these improvements should show value long-term.
The question is, what level of changes would the community feel comfortable with?