
Fix leak in write_nrnthread. #2456

Merged
merged 3 commits into from
Aug 21, 2023

Conversation

@1uc (Collaborator) commented Aug 10, 2023

Running a memory profiler on simulations that use neurodamus to create the input files for CoreNEURON over multiple cycles showed that memory allocated in nrnthread_dat2_mech accumulated across cycles. The function is called from write_nrnthread with the following context:

        double* data = NULL;
        nrnthread_dat2_mech(..., data, ...);

        if (nrn_is_artificial_[type]) {
            delete[] data;
        }

Looking into nrnthread_dat2_mech we find the following structure:

int nrnthread_dat2_mech(..., double*& data, ...) {
    bool copy = data ? true : false;

    // As the NEURON data is now transposed then for now always create a new
    // copy in the format expected by CoreNEURON -- TODO make sure this is
    // freed, then later TODO remove the need for this entirely
    if (!copy) {
        data = new double[n * sz];
    }
    // Don't give away/store `data` anywhere.
    // ...
}

In summary, when called from write_nrnthread:

  1. The argument data passed to nrnthread_dat2_mech is unconditionally NULL.
  2. Memory is therefore unconditionally allocated and assigned to data.
  3. The memory is only released when nrn_is_artificial_[type] holds, which is only sometimes the case.

The immediate fix is to simply release the memory since it doesn't seem to be used afterwards. Profiling the same run shows the expected reduced memory consumption.

@azure-pipelines

✔️ cd3c5fb -> Azure artifacts URL

@codecov

codecov bot commented Aug 10, 2023

Codecov Report

Merging #2456 (932cd32) into master (efc28fe) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master    #2456   +/-   ##
=======================================
  Coverage   60.41%   60.42%           
=======================================
  Files         626      626           
  Lines      120812   120799   -13     
=======================================
+ Hits        72990    72992    +2     
+ Misses      47822    47807   -15     
Files Changed                                          Coverage              Δ
...rniv/nrncore_write/callbacks/nrncore_callbacks.cpp  93.38% <ø>            (ø)
src/nrniv/nrncore_write/io/nrncore_io.h                100.00% <ø>           (ø)
src/nrniv/nrncore_write/io/nrncore_io.cpp              76.76% <100.00%>      (+3.35%) ⬆️

... and 2 files with indirect coverage changes



@ohm314 (Member) left a comment

Great find, @1uc! The fix looks good to me, but I'd prefer that someone else also look at it.

@1uc force-pushed the 1uc/fix-write_nrnthread-leak branch from cd3c5fb to 0e7d4a2 on August 18, 2023 18:51
@1uc marked this pull request as ready for review on August 18, 2023 18:51
@azure-pipelines

✔️ 0e7d4a2 -> Azure artifacts URL


@azure-pipelines

✔️ 932cd32 -> Azure artifacts URL

@olupton (Collaborator) commented Aug 21, 2023

Just to provide some more explicit explanation, before #2027 we had this logic:

if (isart) {  // data may not be contiguous
    data1 = contiguous_art_data(ml->_data, n, sz);  // delete after use
    nodeindices = NULL;
} else {
    nodeindices = ml->nodeindices;  // allocated below if copy
    data1 = ml->_data[0];           // do not delete after use
}

where contiguous_art_data allocated new storage that needed to be freed later:
double* contiguous_art_data(Memb_list* ml, int nitem, int szitem) {
    double* d1 = new double[nitem * szitem];
    int k = 0;
    for (int i = 0; i < nitem; ++i) {
        for (int j = 0; j < szitem; ++j) {
            d1[k++] = ml->data(i, j);
        }
    }
    return d1;
}

whereas in master we treat artificial and non-artificial cells uniformly:
// As the NEURON data is now transposed then for now always create a new
// copy in the format expected by CoreNEURON -- TODO make sure this is
// freed, then later TODO remove the need for this entirely
if (!copy) {
    data = new double[n * sz];
}

@olupton merged commit 67f7347 into master on Aug 21, 2023 (33 checks passed)
@olupton deleted the 1uc/fix-write_nrnthread-leak branch on August 21, 2023 12:17