Status: Open
Labels: enhancement (New feature or request)
Description
Overview
create_render_pipeline always compiles pipelines cold without utilizing wgpu's PipelineCache feature. This results in repeated shader compilation overhead on application startup and when creating new pipelines at runtime. By maintaining a long-lived wgpu::PipelineCache and passing it into pipeline creation, compilation can be amortized across pipeline builds and potentially persisted across sessions.
Current State
RenderPipelineBuilder::build() passes `cache: None` when creating render pipelines:

```rust
// crates/lambda-rs-platform/src/wgpu/pipeline.rs
let raw =
  gpu
    .device()
    .create_render_pipeline(&wgpu::RenderPipelineDescriptor {
      label: self.label.as_deref(),
      layout: layout_ref,
      vertex: vertex_state,
      primitive: primitive_state,
      depth_stencil: self.depth_stencil,
      multisample: wgpu::MultisampleState {
        count: self.sample_count,
        ..wgpu::MultisampleState::default()
      },
      fragment,
      multiview_mask: None,
      cache: None, // No pipeline cache utilized
    });
```

This means every pipeline compilation starts from scratch, even when shader modules are shared or similar pipelines already exist.
Scope
Goals:
- Add a long-lived `wgpu::PipelineCache` to the `Gpu` struct
- Pass the cache into `create_render_pipeline` calls
- Expose an API for cache persistence (save/load) for startup optimization

Non-Goals:
- Automatic cache invalidation strategy (rely on wgpu's internal handling)
- Compute pipeline caching (can be added in a follow-up)
Proposed API
```rust
// In crates/lambda-rs-platform/src/wgpu/gpu.rs
pub struct Gpu {
  // ... existing fields ...
  pipeline_cache: wgpu::PipelineCache,
}

impl Gpu {
  /// Returns a reference to the pipeline cache for pipeline creation.
  pub fn pipeline_cache(&self) -> &wgpu::PipelineCache {
    return &self.pipeline_cache;
  }

  /// Export the pipeline cache data for persistence.
  pub fn export_pipeline_cache(&self) -> Option<Vec<u8>> {
    return self.pipeline_cache.get_data();
  }
}
```
```rust
// In crates/lambda-rs-platform/src/wgpu/gpu.rs (GpuBuilder)
impl GpuBuilder {
  /// Load pipeline cache data from a previous session.
  pub fn with_pipeline_cache_data(mut self, data: &[u8]) -> Self {
    self.cache_data = Some(data.to_vec());
    return self;
  }
}
```
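The creation step itself is worth sketching, since wgpu gates `PipelineCache` behind a feature and its constructor is `unsafe`. A minimal sketch of what `GpuBuilder::build` could do once the device exists (the helper name and label are assumptions; the wgpu calls are real API):

```rust
// Sketch: inside GpuBuilder::build, after the device is created. Assumes the
// builder requested wgpu::Features::PIPELINE_CACHE; backends that do not
// support it simply skip caching.
fn create_pipeline_cache(
  device: &wgpu::Device,
  stored: Option<&[u8]>, // bytes from with_pipeline_cache_data, if any
) -> Option<wgpu::PipelineCache> {
  if !device.features().contains(wgpu::Features::PIPELINE_CACHE) {
    return None;
  }
  // Safety: `data` must be bytes previously returned by PipelineCache::get_data
  // on a compatible adapter. `fallback: true` tells wgpu to start with an empty
  // cache instead of erroring when the stored bytes are stale or mismatched.
  return Some(unsafe {
    device.create_pipeline_cache(&wgpu::PipelineCacheDescriptor {
      label: Some("lambda-pipeline-cache"),
      data: stored,
      fallback: true,
    })
  });
}
```

Because the feature is unsupported on some backends, the `Gpu` field may need to be `Option<wgpu::PipelineCache>` (with `RenderPipelineBuilder` passing `cache: None` in that case) rather than the bare `wgpu::PipelineCache` shown above.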
```rust
// In crates/lambda-rs-platform/src/wgpu/pipeline.rs
impl<'a> RenderPipelineBuilder<'a> {
  pub fn build(
    self,
    gpu: &Gpu,
    vertex_shader: &ShaderModule,
    fragment_shader: Option<&ShaderModule>,
  ) -> RenderPipeline {
    // ... existing code ...
    let raw =
      gpu
        .device()
        .create_render_pipeline(&wgpu::RenderPipelineDescriptor {
          // ... existing fields ...
          cache: Some(gpu.pipeline_cache()), // Use the GPU's pipeline cache
        });
    // ...
  }
}
```

Example Usage:
```rust
// Load cache from disk at startup
let cache_data = std::fs::read("pipeline_cache.bin").ok();
let mut gpu_builder = GpuBuilder::new();
if let Some(data) = &cache_data {
  gpu_builder = gpu_builder.with_pipeline_cache_data(data);
}
let gpu = gpu_builder.build(&surface)?;

// ... create pipelines (automatically uses cache) ...

// Save cache to disk at shutdown
if let Some(data) = gpu.export_pipeline_cache() {
  std::fs::write("pipeline_cache.bin", data).ok();
}
```

Acceptance Criteria
- `Gpu` struct contains a `wgpu::PipelineCache`
- Cache is created during `Gpu` initialization
- `RenderPipelineBuilder::build()` passes the cache to `create_render_pipeline`
- `GpuBuilder::with_pipeline_cache_data()` method for loading persisted cache
- `Gpu::export_pipeline_cache()` method for saving cache data
- Documentation explaining pipeline cache benefits and persistence
- Example demonstrating cache persistence (optional, nice-to-have)
Affected Crates
lambda-rs-platform
Notes
- Pipeline cache effectiveness varies by backend (Vulkan benefits most)
- Cache data is backend-specific and should not be shared across different backends
- Future work: compute pipeline caching, automatic cache file management
- Consider versioning cache files to handle shader/wgpu updates
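The versioning note can be made concrete by deriving the cache file name from everything the cached blob depends on, so any mismatch naturally results in a fresh cold file. wgpu offers `wgpu::util::pipeline_cache_key` for the adapter-specific part; the sketch below shows the same idea with plain strings (the helper name and inputs are hypothetical):

```rust
// Hypothetical helper: embed the adapter key, wgpu version, and a shader
// revision in the cache file name, so a change in any of them selects a
// different (initially empty) file instead of feeding stale data to wgpu.
fn cache_file_name(adapter_key: &str, wgpu_version: &str, shader_rev: &str) -> String {
  // Replace path-hostile characters so each key is safe as a file-name segment.
  let sanitize = |s: &str| {
    s.chars()
      .map(|c| if c.is_ascii_alphanumeric() { c } else { '-' })
      .collect::<String>()
  };
  return format!(
    "pipeline_cache.{}.{}.{}.bin",
    sanitize(adapter_key),
    sanitize(wgpu_version),
    sanitize(shader_rev)
  );
}
```

Combined with `fallback: true` at cache creation, this gives two layers of safety: wrong files are never opened, and a corrupt or stale blob degrades to a cold cache rather than an error.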