
Performance Optimization Discussion #3437

Open
Slind14 opened this issue Jan 30, 2024 · 7 comments
@Slind14

Slind14 commented Jan 30, 2024

Hi Prebid Server Community,

We would like to start a discussion about improving the performance of Prebid Server (Go).
These recommendations come from the experience of operating Golang at a high scale, very efficiently.

Compressions

  • Switch to “klauspost/compress” or “cloudflare/zlib” (not a go module)
  • Remove unnecessary byte copy & add the Buffer to the sync.Pool

JSON

  • Switch to “mailru/easyjson” (AMD and ARM) or to “bytedance/sonic” (faster but no ARM support)
  • Disallow adapters from using the golang built-in JSON

Logging

  • Switch to “rs/zerolog” or “uber-go/zap”

HTTP

  • Switch to “valyala/fasthttp”

General Optimization

  • Reduce unnecessary memory allocations within the core (not adapters)

Overall, it should be possible to double, if not triple, the QPS Prebid Server can handle on the same compute.

While the cost and carbon-footprint savings might not be large yet, more and more processing is moving server-side, and these improvements should show value long-term.

The question is, what level of changes would the community feel comfortable with?

@vnandha

vnandha commented Feb 1, 2024

@Slind14 are there baseline performance numbers for the current server that I can refer to?

@Slind14
Author

Slind14 commented Feb 1, 2024

@Slind14 are there baseline performance numbers for the current server that I can refer to?

What are you looking for exactly?

We believe that Prebid Server Go could be 6 to 10 times faster than it is now, based on a comparison with other internal projects we run. However, that level of optimization and complexity is unreasonable for an open-source project whose goal is to keep contributions simple.

Hence the suggestion to focus on replacing some of the "slow" libraries and removing unnecessary memory allocations where it is trivial, so contributions are not hurt.

@SyntaxNode
Contributor

SyntaxNode commented Feb 5, 2024

The question is, what level of changes would the community feel comfortable with?

We welcome improvements to the project, especially those related to performance. Prebid Server is often run at a very high scale, where efficiency gains can directly translate to cost savings and reduced emissions.

Prebid Server uses established and well-maintained libraries. It's important to choose libraries which are likely to issue security updates, properly handle edge cases, and improve their design as Go evolves. Some of the libraries were chosen many years ago and a re-evaluation is a good idea.

Prebid Server is written in pure Go to give hosts the flexibility to choose their preferred operating system and CPU architecture. I don't see an issue with including libraries optimized for a specific platform, provided we maintain pure Go fallbacks or can determine there is no negative impact to existing hosts.

You're welcome to experiment with performance optimizations and submit a PR with your proposal. Please provide benchmarks to demonstrate the change. There's not much benchmark coverage in this project, so you'll likely need to write new ones in your PR.

Switch to “klauspost/compress” or “cloudflare/zlib” (not a go module)

There are three scenarios for GZIP compression in Prebid Server:

  1. GZIP encoded requests to Prebid Server.
  2. GZIP encoded responses to Prebid Server from bidding servers.
  3. GZIP encoded requests from Prebid Server to bidding servers.

For scenario 1 we currently use "NYTimes/gziphandler" and for the other scenarios we use the standard "compress/gzip". You're welcome to experiment with other libraries provided they adhere to the guidelines above.

Remove unnecessary byte copy & add the Buffer to the sync.Pool

The line you commented on is creating a new slice, which uses some extra memory, but does not copy the underlying array. There is potential to use a sync.Pool for the gzip encoding buffers. The buffer is returned from getRequestBody in the form of a slice pointing to the buffer memory which is then used as part of the http request object. You'll likely need to refactor the lifecycle of the buffer to achieve reuse with a pool.
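One way to refactor that lifecycle is to have the body-building function hand the caller a release callback alongside the slice, so the buffer is only returned to the pool after the HTTP request has been sent. A minimal sketch of that idea (the names `getRequestBody` and `release` are illustrative, not the actual Prebid Server signatures):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

var pool = sync.Pool{New: func() any { return new(bytes.Buffer) }}

// getRequestBody returns the encoded body plus a release callback.
// The caller invokes release once the HTTP request has completed,
// at which point the buffer can safely go back to the pool.
func getRequestBody(payload []byte) (body []byte, release func()) {
	buf := pool.Get().(*bytes.Buffer)
	buf.Reset()
	buf.Write(payload) // real code would gzip-encode here
	return buf.Bytes(), func() { pool.Put(buf) }
}

func main() {
	body, release := getRequestBody([]byte("bid request"))
	fmt.Println(string(body))
	release() // the slice must not be used after this point
}
```

The returned slice aliases the pooled buffer, so the contract "don't touch `body` after `release`" is what makes the reuse safe.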

Switch to “mailru/easyjson” (AMD and ARM) or to “bytedance/sonic” (faster but no ARM support)

We recently switched from "encoding/json" to "json-iterator/go" after reviewing alternative options. We dismissed the "mailru/easyjson" library because it's no longer maintained and has a pre-build step. That said, I think the community would support a pre-build step for a significant performance gain.

While many projects claim to be compatible as a drop-in replacement for "encoding/json", in reality none are. We chose "json-iterator/go" because it was the closest and required the least amount of modification for a 2-3x performance gain. There were other libraries we evaluated which encode faster and make fewer allocations, such as "goccy/go-json", but those failed in specific edge cases covered by Prebid Server tests.

As part of replacing the json library, we created new utility methods in "util/jsonutil" to centralize the library call. This will make it easier for you, or others, to try different libraries.
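The value of that centralization is that swapping libraries becomes a one-line change. A sketch of the pattern, backed by the standard library here (the real "util/jsonutil" package may be structured differently):

```go
package main

import (
	stdjson "encoding/json"
	"fmt"
)

// Exposing Marshal/Unmarshal as package-level function variables means
// trying a different JSON library is a one-line swap at this point,
// e.g. jsoniter.ConfigCompatibleWithStandardLibrary.Marshal, instead
// of edits scattered across the whole codebase.
var (
	Marshal   = stdjson.Marshal
	Unmarshal = stdjson.Unmarshal
)

func main() {
	out, err := Marshal(map[string]int{"qps": 100})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```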

Disallow adapters from using the golang built-in JSON

This is planned. We first need to fix a compatibility issue in "json-iterator/go" where RawMessage objects are left completely untouched but are compacted in "encoding/json". While the behavior of "json-iterator/go" may be preferred, we need to ensure all Prebid Server adapters are compatible with a potential change. With over 200 adapters, we don't have the ability to test them individually.

Switch to “rs/zerolog” or “uber-go/zap”

As a prerequisite to changing the logging library, we'd like to create an intermediary type similar to what we did with json encoding. That will decouple this project from a specific library and make it easier to replace - or offer a choice to hosts. We haven't had the bandwidth to work on that refactor and community contributions are encouraged.

Switch to “valyala/fasthttp”

Compatibility is paramount and this change is harder to verify before deployment. Do you run Prebid Server and have the ability to run experiments in production (at a low traffic level) to help prove other libraries are as complete in handling edge cases as the standard Go library?

Reduce unnecessary memory allocations within the core (not adapters)

We encourage hosts to tweak their GC settings to better accommodate a large number of heap allocations. With this tuning in place, the performance impact of reducing allocations is less noticeable - but still a positive impact to the project. I recommend keeping PRs of this kind smaller in scope for a quicker review.

it should be possible to double if not triple the Prebid Server compute requirements for the same QPS

That's a bold goal. I kindly dare you to try. :)

@bretg
Contributor

bretg commented Feb 28, 2024

Discussed in committee. We agreed to spin off a sub-committee to work through the details. We will start the conversation in the Prebid Slack workspace.

@Slind14
Author

Slind14 commented Feb 28, 2024

Thank you, @bretg.

@SyntaxNode thank you for the detailed response as well.

For your information, the proprietary tech we have handles an equal amount of QPS at 1/8th of the CPU usage, while spending over 90% of its time on machine learning predictions. Hence, there is a lot of room for improvement :)

@bretg
Contributor

bretg commented Jul 26, 2024

@Slind14 and @bsardo - could we get an update on this effort overall?

@bretg bretg moved this from Research to In Progress in Prebid Server Prioritization Jul 26, 2024
@patmmccann
Contributor

Does it make sense to break these into different issues, or somehow subissues, to track the work? Are there similar opportunities in Java?
