
Display pool memory leak #87

Open
DGriffin91 opened this issue Sep 18, 2024 · 1 comment

There seems to be some kind of leak in the display pool. I haven't made a minimal reproducible example yet because I'm not sure exactly what's causing it. It still happens with almost all of my passes disabled except the main opaque pass; that's as far as I've been able to reduce it so far. The leak rate scales with the number of subpasses (more subpasses leak faster). It also seems to cause this call

cmd_buf.device.cmd_begin_render_pass(
to take longer each frame. cmd_begin_render_pass slows down for every pass, not just the opaque one (the blit to the swapchain also takes longer). Eventually it can take over 1 ms, versus around 2 µs at the start.

Using https://crates.io/crates/leak-detect-allocator I found 4 additional instances of this showing up each frame:

leak memory address: 0x1e4d0f06a40, size: 1088
    0x7ff785c43597, backtrace::backtrace::dbghelp64::tracebacktrace::backtrace::trace_unsynchronized<leak_detect_allocator::impl$0::alloc_accounting::closure_env$0<10> >
    0x7ff785c49d40, leak_detect_allocator::LeakTracer<10>::alloc_accountingleak_detect_allocator::impl$1::alloc<10>
    0x7ff785f950d3, alloc::alloc::allocalloc::alloc::Global::alloc_implalloc::alloc::impl$1::allocatealloc::raw_vec::RawVec<screen_13::driver::render_pass::SubpassInfo,alloc::alloc::Global>::try_allocate_inalloc::raw_vec::RawVec<screen_13::driver::render_pass::SubpassInfo,alloc::alloc::Global>::with_capacity_inalloc::vec::Vec<screen_13::driver::render_pass::SubpassInfo,alloc::alloc::Global>::with_capacity_inalloc::vec::Vec<screen_13::driver::render_pass::SubpassInfo,alloc::alloc::Global>::with_capacity
    0x7ff785f9c843, screen_13::graph::resolver::Resolver::record_scheduled_passes<dyn$<screen_13::display::ResolverPool> >
    0x7ff785fa4770, screen_13::graph::resolver::impl$2::record_node_passes::closure$0std::thread::local::impl$6::with_borrow_mut::closure$0std::thread::local::LocalKey<core::cell::RefCell<screen_13::graph::resolver::Schedule> >::try_withstd::thread::local::LocalKey<core::cell::RefCell<screen_13::graph::resolver::Schedule> >::with
    0x7ff785f676dc, screen_13::display::Display::resolve_image
    0x7ff785f17d15, bs13_core::resolve_and_submitbs13_core::s13_send_render_state_and_wait
    0x7ff785f27015, core::ops::function::FnMut::call_mutcore::ops::function::impls::impl$3::call_mutbevy_ecs::system::function_system::impl$26::run::call_innerbevy_ecs::system::function_system::impl$26::runbevy_ecs::system::function_system::impl$7::run_unsafe<void (*)(bevy_ecs::change_detection::NonSendMut<bs13_core::S13RenderGraph>,bevy_ecs::change_detection::Res<bs13_core::BlitViewTarget>,bevy_ecs::event::EventReader<bevy_window::event::WindowResized>,bev
    0x7ff7884d760d, bevy_ecs::schedule::executor::__rust_begin_short_backtrace::run_unsafe
    0x7ff7884e4df9, bevy_ecs::schedule::executor::multi_threaded::impl$5::spawn_system_task::async_block$0::closure$0core::ops::function::FnOnce::call_oncecore::panic::unwind_safe::impl$25::call_oncestd::panicking::try::do_callstd::panicking::trystd::panic::catch_unwindbevy_ecs::schedule::executor::multi_threaded::impl$5::spawn_system_task::async_block$0core::panic::unwind_safe::impl$28::pollfutures_lite::future::impl$9::poll::closure$0core::panic::unwind_safe::impl$25::call_oncestd::panicking::try::do_call

I noticed that doing an early submit before the swap made both the memory leak and the per-frame slowdown go away:
graph.resolve().submit(&mut LazyPool::new(&render_state.device), 0, 0).unwrap();

Then I tried just resetting the display pool before display.resolve_image, and that also seemed to resolve the issue:
display.pool = Box::new(HashPool::new(&render_state.device));

Resizing the window also seems to reset the per-frame slowdown, but it doesn't free the already-leaked memory.
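To make the suspected failure mode concrete, here is a self-contained sketch (hypothetical types and names, not the actual screen-13 code): a pool keyed by a pass description never reuses its entries if merged passes yield a slightly different key each frame, so the cache grows without bound unless it is explicitly reset, which mirrors the workaround of replacing the pool before resolve_image.

```rust
use std::collections::HashMap;

// Hypothetical key for a cached render pass; frame_tag stands in for
// whatever makes merged-pass keys unique from one frame to the next.
#[derive(Hash, PartialEq, Eq, Clone)]
struct PassKey {
    subpass_count: usize,
    frame_tag: u64,
}

// Hypothetical pool; Vec<u8> stands in for SubpassInfo storage
// (1088 bytes matches the per-allocation size in the leak report).
struct Pool {
    cache: HashMap<PassKey, Vec<u8>>,
}

impl Pool {
    fn new() -> Self {
        Pool { cache: HashMap::new() }
    }

    // A never-before-seen key allocates a fresh entry; nothing evicts it.
    fn lease(&mut self, key: PassKey) -> &Vec<u8> {
        self.cache.entry(key).or_insert_with(|| vec![0u8; 1088])
    }

    // The workaround: drop everything, as with `display.pool = Box::new(...)`.
    fn reset(&mut self) {
        self.cache.clear();
    }

    fn len(&self) -> usize {
        self.cache.len()
    }
}

// Simulate `frames` frames; with a key that differs every frame the pool
// grows by one entry per frame unless it is reset each frame.
fn simulate_frames(frames: u64, reset_each_frame: bool) -> usize {
    let mut pool = Pool::new();
    for frame in 0..frames {
        if reset_each_frame {
            pool.reset();
        }
        let _ = pool.lease(PassKey { subpass_count: 1, frame_tag: frame });
    }
    pool.len()
}
```

With a stable key the pool would hold one entry regardless of frame count; the unbounded growth only appears when the key changes per frame.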

DGriffin91 added a commit to DGriffin91/screen-13 that referenced this issue Sep 18, 2024
@DGriffin91 commented:

It may be related to merging passes. If I always return false from

fn allow_merge_passes(lhs: &Pass, rhs: &Pass) -> bool {
the issue does not seem to occur.
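For reference, that diagnostic amounts to making the merge predicate decline unconditionally. A minimal sketch (the Pass type here is an empty stand-in, not the real screen-13 struct):

```rust
// Stand-in for screen_13's Pass type; only the predicate's shape matters here.
struct Pass;

// Diagnostic variant of the merge predicate: never merge, so every pass
// keeps its own render pass object.
fn allow_merge_passes(_lhs: &Pass, _rhs: &Pass) -> bool {
    false
}
```

With merging disabled this way, neither the leak nor the per-frame slowdown was observed, which points the investigation at the merged-pass path.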
