Description
There seems to be some kind of leak involving the display pool. I haven't made a minimal reproducible example yet because I'm not sure what's causing it. The leak still happens when I disable almost all of my passes except the main opaque pass; that's as far as I've managed to reduce it so far. It scales with the number of subpasses (more subpasses leak faster). It also seems to cause a slowdown here:
screen-13/src/graph/resolver.rs, line 614 (commit bdc4a81):
cmd_buf.device.cmd_begin_render_pass(
cmd_begin_render_pass takes longer each frame, and for every pass, not just the opaque one (the blit to the swapchain slows down as well). At the start it takes around 2 µs; eventually it can take over 1 ms.
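For anyone who wants to watch the cost grow, a throwaway timer around the per-frame resolve/present path is enough. This helper is my own sketch and has nothing screen-13 specific in it:

```rust
use std::time::Instant;

/// Throwaway instrumentation: run `f` and log whenever it takes longer
/// than `budget_us` microseconds. Wrapping the per-frame resolve/present
/// call in this makes the growth easy to spot.
fn time_over_budget<T>(label: &str, budget_us: u128, f: impl FnOnce() -> T) -> T {
    let start = Instant::now();
    let out = f();
    let elapsed = start.elapsed();
    if elapsed.as_micros() > budget_us {
        eprintln!("{label}: {elapsed:?} (budget {budget_us} µs)");
    }
    out
}
```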
Using https://crates.io/crates/leak-detect-allocator I found four new instances of the following allocation showing up each frame:
leak memory address: 0x1e4d0f06a40, size: 1088
0x7ff785c43597, backtrace::backtrace::dbghelp64::trace
                backtrace::backtrace::trace_unsynchronized<leak_detect_allocator::impl$0::alloc_accounting::closure_env$0<10> >
0x7ff785c49d40, leak_detect_allocator::LeakTracer<10>::alloc_accounting
                leak_detect_allocator::impl$1::alloc<10>
0x7ff785f950d3, alloc::alloc::alloc
                alloc::alloc::Global::alloc_impl
                alloc::alloc::impl$1::allocate
                alloc::raw_vec::RawVec<screen_13::driver::render_pass::SubpassInfo,alloc::alloc::Global>::try_allocate_in
                alloc::raw_vec::RawVec<screen_13::driver::render_pass::SubpassInfo,alloc::alloc::Global>::with_capacity_in
                alloc::vec::Vec<screen_13::driver::render_pass::SubpassInfo,alloc::alloc::Global>::with_capacity_in
                alloc::vec::Vec<screen_13::driver::render_pass::SubpassInfo,alloc::alloc::Global>::with_capacity
0x7ff785f9c843, screen_13::graph::resolver::Resolver::record_scheduled_passes<dyn$<screen_13::display::ResolverPool> >
0x7ff785fa4770, screen_13::graph::resolver::impl$2::record_node_passes::closure$0
                std::thread::local::impl$6::with_borrow_mut::closure$0
                std::thread::local::LocalKey<core::cell::RefCell<screen_13::graph::resolver::Schedule> >::try_with
                std::thread::local::LocalKey<core::cell::RefCell<screen_13::graph::resolver::Schedule> >::with
0x7ff785f676dc, screen_13::display::Display::resolve_image
0x7ff785f17d15, bs13_core::resolve_and_submit
                bs13_core::s13_send_render_state_and_wait
0x7ff785f27015, core::ops::function::FnMut::call_mut
                core::ops::function::impls::impl$3::call_mut
                bevy_ecs::system::function_system::impl$26::run::call_inner
                bevy_ecs::system::function_system::impl$26::run
                bevy_ecs::system::function_system::impl$7::run_unsafe<void (*)(bevy_ecs::change_detection::NonSendMut<bs13_core::S13RenderGraph>,bevy_ecs::change_detection::Res<bs13_core::BlitViewTarget>,bevy_ecs::event::EventReader<bevy_window::event::WindowResized>,bev
0x7ff7884d760d, bevy_ecs::schedule::executor::__rust_begin_short_backtrace::run_unsafe
0x7ff7884e4df9, bevy_ecs::schedule::executor::multi_threaded::impl$5::spawn_system_task::async_block$0::closure$0
                core::ops::function::FnOnce::call_once
                core::panic::unwind_safe::impl$25::call_once
                std::panicking::try::do_call
                std::panicking::try
                std::panic::catch_unwind
                bevy_ecs::schedule::executor::multi_threaded::impl$5::spawn_system_task::async_block$0
                core::panic::unwind_safe::impl$28::poll
                futures_lite::future::impl$9::poll::closure$0
                core::panic::unwind_safe::impl$25::call_once
                std::panicking::try::do_call
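Going by the frames above, the 1088-byte allocation is a Vec<screen_13::driver::render_pass::SubpassInfo> created inside Resolver::record_scheduled_passes. For completeness, the tracer was wired up roughly the way the crate's README shows; I'm writing this from memory, so treat the exact names (LeakTracerDefault, now_leaks, get_symbol_name) as approximate and check the crate docs:

```rust
// Rough wiring of the tracer, as in the crate's README (from memory; the
// exact API names here are approximate, so check the docs before copying).
use leak_detect_allocator::LeakTracerDefault;

#[global_allocator]
static LEAK_TRACER: LeakTracerDefault = LeakTracerDefault::new();

fn dump_leaks() {
    let mut out = String::new();
    LEAK_TRACER.now_leaks(|address: usize, size: usize, stack: &[usize]| {
        out += &format!("leak memory address: {:#x}, size: {}\n", address, size);
        for frame in stack {
            // Resolve each return address to a symbol name.
            out += &format!("\t{:#x}, {}\n", *frame, LEAK_TRACER.get_symbol_name(*frame));
        }
        true // keep iterating over all recorded leaks
    });
    eprintln!("{out}");
}

fn main() {
    LEAK_TRACER.init();
    // ...run the app for a while, then dump whatever is still live:
    dump_leaks();
}
```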
I noticed that if I did an early submit before the swap, both the memory leak and the growing per-frame time went away:
graph.resolve().submit(&mut LazyPool::new(&render_state.device), 0, 0).unwrap();
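In context the frame then looks roughly like this; it's a sketch using my own names (graph, render_state, display), with the pass recording and swapchain blit elided:

```rust
// Sketch of a frame with the early-submit workaround in place. `graph`,
// `render_state`, and `display` are the names from my code, not screen-13
// API; pass recording and the swapchain blit are elided.
let mut pool = LazyPool::new(&render_state.device);

// ...record the opaque pass (and any others) into `graph`...

// Workaround: resolve and submit here, before anything is handed to the
// display/swap path. With this in place, neither the leak nor the growing
// cmd_begin_render_pass time shows up.
graph.resolve().submit(&mut pool, 0, 0).unwrap();

// ...present as usual...
```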
Then I tried just resetting the display pool before calling display.resolve_image, and that also seemed to resolve the issue:
display.pool = Box::new(HashPool::new(&render_state.device));
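Recreating the pool every frame throws away everything it has cached, so that reset is only diagnostic-grade. A throttled variant (hypothetical; I've only tested the per-frame reset above) should bound the growth while keeping the cache warm most of the time:

```rust
// Hypothetical throttled variant of the reset above: recreate the display
// pool every N frames instead of every frame. `frame_index`, `display`,
// and `render_state` are the same names used elsewhere in this report.
const POOL_RESET_INTERVAL: u64 = 512;

if frame_index % POOL_RESET_INTERVAL == 0 {
    display.pool = Box::new(HashPool::new(&render_state.device));
}
```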
Resizing the window also seems to reset the growing per-frame time, but it doesn't appear to free the leaked memory.