[Merged by Bors] - Document PipelineCache and related types #5600

91 changes: 90 additions & 1 deletion crates/bevy_render/src/render_resource/pipeline_cache.rs
@@ -25,6 +25,9 @@ enum PipelineDescriptor {
ComputePipelineDescriptor(Box<ComputePipelineDescriptor>),
}

/// Union of a render and compute pipeline.
///
/// Used to store a heterogeneous collection of render and compute pipelines together.
#[derive(Debug)]
pub enum Pipeline {
RenderPipeline(RenderPipeline),
@@ -52,14 +55,28 @@ struct CachedPipeline {
state: CachedPipelineState,
}

/// State of a cached render or compute pipeline.
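///
/// A sketch of inspecting the state of a queued pipeline (marked `ignore` because `cache`
/// and `id` are assumed to come from an existing [`PipelineCache`]):
///
/// ```ignore
/// match cache.get_render_pipeline_state(id) {
///     CachedPipelineState::Queued => { /* GPU object not created yet */ }
///     CachedPipelineState::Ok(pipeline) => { /* the GPU object is ready to use */ }
///     CachedPipelineState::Err(err) => { /* report or handle `err` */ }
/// }
/// ```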
#[derive(Debug)]
pub enum CachedPipelineState {
/// The pipeline GPU object is queued for creation.
Queued,
/// The pipeline GPU object was created successfully and is available (allocated on the GPU).
Ok(Pipeline),
/// An error occurred while trying to create the pipeline GPU object.
Err(PipelineCacheError),
}

impl CachedPipelineState {
/// Convenience method to "unwrap" a pipeline state into its underlying GPU object.
///
/// # Returns
///
/// The method returns the allocated pipeline GPU object.
///
/// # Panics
///
/// This method panics if the pipeline GPU object is not available, either because it is
/// pending creation or because an error occurred while attempting to create the GPU object.
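///
/// # Examples
///
/// A sketch (marked `ignore` since `cache` and `id` are assumed to come from prior calls):
///
/// ```ignore
/// // Panics if the pipeline is still queued or if its creation failed.
/// let pipeline: &Pipeline = cache.get_render_pipeline_state(id).unwrap();
/// ```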
pub fn unwrap(&self) -> &Pipeline {
match self {
CachedPipelineState::Ok(pipeline) => pipeline,
@@ -72,7 +89,7 @@ impl CachedPipelineState {
}

#[derive(Default)]
pub struct ShaderData {
struct ShaderData {
pipelines: HashSet<CachedPipelineId>,
processed_shaders: HashMap<Vec<String>, Arc<ShaderModule>>,
resolved_imports: HashMap<ShaderImport, Handle<Shader>>,
@@ -263,6 +280,18 @@ impl LayoutCache {
}
}

/// Cache for render and compute pipelines.
///
/// The cache stores existing render and compute pipelines allocated on the GPU, as well as
/// pipelines pending creation. Pipelines inserted into the cache are identified by a unique ID, which
/// can be used to retrieve the actual GPU object once it's ready. The creation of the GPU
/// pipeline object is deferred to the [`RenderStage::Render`] stage, just before the render
/// graph starts being processed, as this requires access to the GPU.
///
/// Note that the cache does not perform automatic deduplication of identical pipelines. It is
/// up to the user not to insert the same pipeline twice to avoid wasting GPU resources.
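///
/// # Examples
///
/// A minimal sketch of the intended flow (marked `ignore`; obtaining the cache resource and
/// building `descriptor` are left out and assumed):
///
/// ```ignore
/// // Insert the pipeline into the cache and remember its unique ID.
/// let id = cache.queue_render_pipeline(descriptor);
///
/// // Later, after `process_queue()` ran (normally during `RenderStage::Render`), the GPU
/// // object can be looked up by that ID.
/// if let Some(render_pipeline) = cache.get_render_pipeline(id) {
///     // Record draw commands with `render_pipeline`.
/// }
/// ```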
///
/// [`RenderStage::Render`]: crate::RenderStage::Render
pub struct PipelineCache {
layout_cache: LayoutCache,
shader_cache: ShaderCache,
@@ -272,6 +301,7 @@ pub struct PipelineCache {
}

impl PipelineCache {
/// Create a new pipeline cache associated with the given render device.
pub fn new(device: RenderDevice) -> Self {
Self {
device,
@@ -282,16 +312,25 @@ impl PipelineCache {
}
}

/// Get the state of a cached render pipeline.
///
/// See [`PipelineCache::queue_render_pipeline()`].
#[inline]
pub fn get_render_pipeline_state(&self, id: CachedRenderPipelineId) -> &CachedPipelineState {
&self.pipelines[id.0].state
}

/// Get the state of a cached compute pipeline.
///
/// See [`PipelineCache::queue_compute_pipeline()`].
#[inline]
pub fn get_compute_pipeline_state(&self, id: CachedComputePipelineId) -> &CachedPipelineState {
&self.pipelines[id.0].state
}

/// Get the render pipeline descriptor a cached render pipeline was inserted from.
///
/// See [`PipelineCache::queue_render_pipeline()`].
#[inline]
pub fn get_render_pipeline_descriptor(
&self,
@@ -303,6 +342,9 @@ impl PipelineCache {
}
}

/// Get the compute pipeline descriptor a cached compute pipeline was inserted from.
///
/// See [`PipelineCache::queue_compute_pipeline()`].
#[inline]
pub fn get_compute_pipeline_descriptor(
&self,
@@ -314,6 +356,13 @@ impl PipelineCache {
}
}

/// Try to retrieve a render pipeline GPU object from a cached ID.
///
/// # Returns
///
/// This method returns a successfully created render pipeline if any, or `None` if the pipeline
/// was not created yet or if there was an error during creation. You can check the actual creation
/// state with [`PipelineCache::get_render_pipeline_state()`].
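///
/// A sketch of using the returned pipeline when recording a render pass (marked `ignore`;
/// `pass` is assumed to be a `TrackedRenderPass` and `id` a previously queued ID):
///
/// ```ignore
/// if let Some(pipeline) = pipeline_cache.get_render_pipeline(id) {
///     pass.set_render_pipeline(pipeline);
/// } else {
///     // Still queued or failed to be created: skip drawing this frame.
/// }
/// ```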
#[inline]
pub fn get_render_pipeline(&self, id: CachedRenderPipelineId) -> Option<&RenderPipeline> {
if let CachedPipelineState::Ok(Pipeline::RenderPipeline(pipeline)) =
@@ -325,6 +374,13 @@ impl PipelineCache {
}
}

/// Try to retrieve a compute pipeline GPU object from a cached ID.
///
/// # Returns
///
/// This method returns a successfully created compute pipeline if any, or `None` if the pipeline
/// was not created yet or if there was an error during creation. You can check the actual creation
/// state with [`PipelineCache::get_compute_pipeline_state()`].
#[inline]
pub fn get_compute_pipeline(&self, id: CachedComputePipelineId) -> Option<&ComputePipeline> {
if let CachedPipelineState::Ok(Pipeline::ComputePipeline(pipeline)) =
@@ -336,6 +392,19 @@ impl PipelineCache {
}
}

/// Insert a render pipeline into the cache, and queue its creation.
///
/// The pipeline is always inserted and queued for creation. There is no attempt to deduplicate it with
/// an already cached pipeline.
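///
/// Because no deduplication is performed, queuing the same descriptor twice creates two
/// separate cache entries, as in this sketch (marked `ignore`; `descriptor` is an assumed
/// [`RenderPipelineDescriptor`]):
///
/// ```ignore
/// let first = cache.queue_render_pipeline(descriptor.clone());
/// let second = cache.queue_render_pipeline(descriptor);
/// // `first` and `second` identify distinct entries, each producing its own GPU pipeline.
/// ```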
///
/// # Returns
///
/// This method returns the unique render pipeline ID of the cached pipeline, which can be used to query
/// the caching state with [`get_render_pipeline_state()`] and to retrieve the created GPU pipeline once
/// it's ready with [`get_render_pipeline()`].
///
/// [`get_render_pipeline_state()`]: PipelineCache::get_render_pipeline_state
/// [`get_render_pipeline()`]: PipelineCache::get_render_pipeline
pub fn queue_render_pipeline(
&mut self,
descriptor: RenderPipelineDescriptor,
@@ -349,6 +418,19 @@ impl PipelineCache {
id
}

/// Insert a compute pipeline into the cache, and queue its creation.
///
/// The pipeline is always inserted and queued for creation. There is no attempt to deduplicate it with
/// an already cached pipeline.
///
/// # Returns
///
/// This method returns the unique compute pipeline ID of the cached pipeline, which can be used to query
/// the caching state with [`get_compute_pipeline_state()`] and to retrieve the created GPU pipeline once
/// it's ready with [`get_compute_pipeline()`].
///
/// [`get_compute_pipeline_state()`]: PipelineCache::get_compute_pipeline_state
/// [`get_compute_pipeline()`]: PipelineCache::get_compute_pipeline
pub fn queue_compute_pipeline(
&mut self,
descriptor: ComputePipelineDescriptor,
@@ -494,6 +576,12 @@ impl PipelineCache {
CachedPipelineState::Ok(Pipeline::ComputePipeline(pipeline))
}

/// Process the pipeline queue and create all pending pipelines if possible.
///
/// This is generally called automatically during the [`RenderStage::Render`] stage, but can
/// be called manually to force creation at a different time.
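///
/// A sketch of forcing creation outside the render stage (marked `ignore`; `cache` is an
/// assumed mutable reference to this [`PipelineCache`]):
///
/// ```ignore
/// let id = cache.queue_render_pipeline(descriptor);
/// cache.process_queue();
/// // The pipeline may now be `Ok`, still `Queued` (e.g. its shader has not loaded yet),
/// // or `Err` if creation failed.
/// let state = cache.get_render_pipeline_state(id);
/// ```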
///
/// [`RenderStage::Render`]: crate::RenderStage::Render
pub fn process_queue(&mut self) {
let waiting_pipelines = mem::take(&mut self.waiting_pipelines);
let mut pipelines = mem::take(&mut self.pipelines);
@@ -654,6 +742,7 @@ fn log_shader_error(source: &ProcessedShader, error: &AsModuleDescriptorError) {
}
}

/// Type of error returned by a [`PipelineCache`] when the creation of a GPU pipeline object failed.
#[derive(Error, Debug)]
pub enum PipelineCacheError {
#[error(