Memory leak occurs when sending eager0 messages in pt2pt in sst-macro standalone mode
1 - Detailed description of problem or enhancement
When MPI_Send and MPI_Recv are used to exchange point-to-point messages, a memory leak occurs whenever the send takes the eager0 protocol path.
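For concreteness, the traffic pattern that exposes the leak looks like the following ping-pong. This is a hypothetical minimal reproducer, not the reporter's exact benchmark; the 8191-byte payload matches the reproduction steps below and is assumed to sit under the eager0 cutoff in the configuration used:

```cpp
// Hypothetical minimal reproducer (not the reporter's benchmark): a ping-pong
// with a fixed 8191-byte payload, assumed small enough to take the eager0 path.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const int n = 8191;            // message size used in the reproduction below
  std::vector<char> buf(n);

  for (int i = 0; i < 100000; ++i) {
    if (rank == 0) {
      MPI_Send(buf.data(), n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
      MPI_Recv(buf.data(), n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
      MPI_Recv(buf.data(), n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      MPI_Send(buf.data(), n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }
  }
  MPI_Finalize();
  return 0;
}
```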
Preliminary analysis suggests the following cause. When a message is sent, Eager0::start() allocates smsg_buffer_ with new by calling fillSendBuffer(). When NetworkMessage::putOnWire() executes, it allocates a wire_buffer_ with new and copies smsg_buffer_ into it, but the code never frees smsg_buffer_ and simply sets it to nullptr. At that point the earlier allocation is no longer reachable through any pointer, so it leaks. The memory released in the NetworkMessage destructor is only that of wire_buffer_.
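Reduced to a self-contained sketch, the suspected pattern looks like this. The names mirror eager0.cc / network_message.cc, but every signature is simplified; this is an illustration of the suspected bug, not the actual sst-macro code:

```cpp
// Self-contained sketch of the suspected leak; names mirror eager0.cc /
// network_message.cc, but everything is simplified for illustration.
#include <cstdint>
#include <cstring>

struct NetworkMessage {
  char* smsg_buffer_ = nullptr;  // allocated in fillSendBuffer()
  char* wire_buffer_ = nullptr;  // the only buffer the destructor frees
  uint64_t length_ = 0;

  void fillSendBuffer(const void* src, uint64_t len) {
    smsg_buffer_ = new char[len];       // the allocation heaptrack reports
    std::memcpy(smsg_buffer_, src, len);
    length_ = len;
  }

  void putOnWire() {
    wire_buffer_ = new char[length_];
    std::memcpy(wire_buffer_, smsg_buffer_, length_);
    // delete [] smsg_buffer_;          // <- seemingly missing: without this,
    smsg_buffer_ = nullptr;             //    the allocation above is orphaned
  }

  ~NetworkMessage() { delete[] wire_buffer_; }
};

int main() {
  const char payload[] = "eager0";
  for (int i = 0; i < 100000; ++i) {    // mirrors the benchmark's loop count
    NetworkMessage m;
    m.fillSendBuffer(payload, sizeof payload);
    m.putOnWire();                      // leaks sizeof(payload) bytes per call
  }
}
```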
Beyond the leak itself, the memory management in this part of the code (network_message.cc, eager0.cc) is confusing to me: there are so many raw new/delete calls that ownership of the buffers becomes unclear. Could this part be structurally cleaned up? One possible direction is sketched after this paragraph.
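As one such direction, sketched purely as a suggestion and not as sst-macro's current API: holding both buffers in std::unique_ptr would make the smsg_buffer_ to wire_buffer_ handoff explicit and make this class of leak impossible by construction (assuming the buffers are plain byte arrays):

```cpp
// Suggestion sketch (not sst-macro's current API): unique_ptr ownership makes
// the smsg_buffer_ -> wire_buffer_ handoff explicit, so a forgotten delete[]
// can no longer leak.
#include <cstdint>
#include <cstring>
#include <memory>

struct NetworkMessage {
  std::unique_ptr<char[]> smsg_buffer_;
  std::unique_ptr<char[]> wire_buffer_;
  uint64_t length_ = 0;

  void fillSendBuffer(const void* src, uint64_t len) {
    smsg_buffer_ = std::make_unique<char[]>(len);
    std::memcpy(smsg_buffer_.get(), src, len);
    length_ = len;
  }

  void putOnWire() {
    // If the wire copy must be a distinct allocation, the old buffer is
    // released automatically on reset(); if not, ownership can simply move:
    //   wire_buffer_ = std::move(smsg_buffer_);
    wire_buffer_ = std::make_unique<char[]>(length_);
    std::memcpy(wire_buffer_.get(), smsg_buffer_.get(), length_);
    smsg_buffer_.reset();  // deterministic free, no leak possible
  }
  // No destructor needed: both buffers release themselves.
};

int main() {
  NetworkMessage m;
  const char payload[] = "eager0";
  m.fillSendBuffer(payload, sizeof payload);
  m.putOnWire();  // heaptrack/valgrind report no leak here
}
```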
Some analysis output from the heaptrack tool follows:
PEAK MEMORY CONSUMERS
1.64G peak memory consumed over 200000 calls from
sumi::MpiProtocol::fillSendBuffer(int, void*, sumi::MpiType*)
at ../../sumi-mpi/mpi_protocol/mpi_protocol.cc:62
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
819.10M consumed over 100000 calls from:
sumi::Eager0::start(void*, int, int, int, int, sumi::MpiType*, int, long, int, sumi::MpiRequest*)
at ../../sumi-mpi/mpi_protocol/eager0.cc:71
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
sumi::MpiQueue::send(sumi::MpiRequest*, int, unsigned short, int, int, sumi::MpiComm*, void*)
at ../../sumi-mpi/mpi_queue/mpi_queue.cc:197
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
sumi::MpiApi::send(void const*, int, unsigned short, int, int, long)
at ../../sumi-mpi/mpi_api_send_recv.cc:81
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
sstmac_send
at ../../sumi-mpi/sstmac_mpi.cc:89
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
userSkeletonMain(int, char**)
in ./osu_latency
sstmac::sw::App::run()
at ../../../sstmac/software/process/app.cc:539
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
sstmac::sw::Thread::runRoutine(void*)
at ../../../sstmac/software/process/thread.cc:141
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
sstmac_make_fcontext
at ../../../sstmac/software/threading/asm/make_x86_64_sysv_elf_gas.S:49
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
819.10M consumed over 100000 calls from:
sumi::Eager0::start(void*, int, int, int, int, sumi::MpiType*, int, long, int, sumi::MpiRequest*)
at ../../sumi-mpi/mpi_protocol/eager0.cc:71
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
sumi::MpiQueue::send(sumi::MpiRequest*, int, unsigned short, int, int, sumi::MpiComm*, void*)
at ../../sumi-mpi/mpi_queue/mpi_queue.cc:197
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
sumi::MpiApi::send(void const*, int, unsigned short, int, int, long)
at ../../sumi-mpi/mpi_api_send_recv.cc:81
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
sstmac_send
at ../../sumi-mpi/sstmac_mpi.cc:89
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
userSkeletonMain(int, char**)
in ./osu_latency
sstmac::sw::App::run()
at ../../../sstmac/software/process/app.cc:539
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
sstmac::sw::Thread::runRoutine(void*)
at ../../../sstmac/software/process/thread.cc:141
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
sstmac_make_fcontext
at ../../../sstmac/software/threading/asm/make_x86_64_sysv_elf_gas.S:49
in /home/WorkSpace/test/SSTMacroBuild/lib/libsstmac.so.12
total runtime: 20.91s.
calls to allocation functions: 9589794 (458710/s)
temporary memory allocations: 6111 (292/s)
peak heap memory consumption: 1.68G
peak RSS (including heaptrack overhead): 1.68G
total memory leaked: 1.64G
suppressed leaks: 10.97K
2 - Describe how to reproduce
git clone https://github.com/sstsimulator/sst-macro.git
./configure --prefix=/home/WorkSpace/test/SSTMacroBuild CFLAGS="-fPIC" CXXFLAGS="-fPIC"
make && make install
cd WorkSpace/test/sst-macro/skeletons/osu-micro-benchmarks-5.3.2
cd mpi/pt2pt
vim osu_latency.c
modify line 87 to: for(size = 8191; size > 0; size = 0) {
modify line 99 to: for(i = 0; i < 100000; i++) {
modify line 110 to: for(i = 0; i < 100000; i++) {
(These edits pin the message size to 8191 bytes so every send stays on the eager0 path, and raise the iteration counts so the leak grows large enough to observe.)
/home/WorkSpace/test/SSTMacroBuild/bin/sst++ -o osu_latency osu_latency.c osu_pt2pt.c -I.
heaptrack /home/WorkSpace/test/SSTMacroBuild/bin/sstmac -f parameter.ini
heaptrack --analyze heaptrack.sstmac.<pid>.gz
If heaptrack is not available, you can use another memory-leak detection tool, or watch the memory usage of sstmac with top/htop.
Parameter.ini is the following:
3 - What Operating system(s) and versions
4 - What version of external libraries (Boost, MPI)
5 - Provide sha1 of all relevant sst repositories (sst-core, sst-elements, etc)
SSTMAC repo: c30a5ce
6 - Fill out Labels, Milestones, and Assignee fields as best possible