[Core] Support global prefix caching #11385
Open · +328 −16
An extension of APC (Automatic Prefix Caching) to implement a global prefix cache.
A global prefix cache can be useful in the following use cases:
This PR extends APC to implement a global prefix cache. It uses a local Dict as the global prefix KV cache pool: the pool is written to once KV cache computation finishes in the prefill phase, and read from when updating input_tokens in the model_runner. The current implementation is simple, requiring no model or CUDA changes, and I can observe a performance improvement in my environment. Testing with papers (10–40 KB) from the Long-document-dataset on an L4 GPU, APC plus this PR reduces generation time by roughly 10%–28% compared with vanilla vLLM, with identical results.
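The read/write flow above can be sketched as a minimal dict-based pool. This is an illustrative sketch only, not the PR's actual code: the class and method names (`PrefixKVPool`, `put`, `get`) are hypothetical, and the KV blocks are left as opaque objects.

```python
from typing import Dict, List, Optional


class PrefixKVPool:
    """Illustrative sketch: maps a token-prefix hash to its computed KV blocks."""

    def __init__(self) -> None:
        self._pool: Dict[int, object] = {}

    @staticmethod
    def _prefix_key(token_ids: List[int]) -> int:
        # Key on the full token prefix; tuples are hashable, lists are not.
        return hash(tuple(token_ids))

    def put(self, token_ids: List[int], kv_blocks: object) -> None:
        # Write path: called once prefill has finished computing KV
        # for this prefix.
        self._pool[self._prefix_key(token_ids)] = kv_blocks

    def get(self, token_ids: List[int]) -> Optional[object]:
        # Read path: called when updating input_tokens in the model
        # runner; a hit means the prefix KV can be copied in rather
        # than recomputed.
        return self._pool.get(self._prefix_key(token_ids))
```

Usage: a second request sharing the same prompt prefix hits the pool and skips recomputation, e.g. `pool.put([1, 2, 3], blocks)` followed by `pool.get([1, 2, 3])` returns the cached blocks, while `pool.get([1, 2])` misses and returns `None`.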
Next Steps
In theory it should work better with longer prompts, based on the assumption that a CPU->GPU copy is faster than recomputing the GPU KV cache in the prefill phase, but I will do more testing on other datasets and hardware. The CPU<->GPU memory copy can be optimized to further improve performance, and the pool can also be integrated with other distributed KV cache pool projects. Please leave comments and feedback, thanks.