Increased stream manager memory usage #1567
@ajorgensen How much does the following one-line fix help?
I still can't reproduce it locally with AckingTopology. |
I can try tomorrow. I'll also try to work out a reproducible test case for you if I can. |
Just in case, make sure you didn't change the default config for tuple cache:
|
No, we have both of those still at the default. @congwang what would happen if the spout could produce tuples faster than the bolt could consume them and the |
The stream manager will send back pressure to the spout in this case. |
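For context, here is a minimal sketch of how a watermark-based back-pressure gate of this kind might work, assuming per-connection high/low watermarks on queued bytes; every name in it (BackPressureGate, the watermarks, the callbacks) is illustrative rather than Heron's actual API:

#include <cstddef>
#include <functional>
#include <utility>

// Illustrative watermark-based back-pressure gate (not Heron's real code).
// When bytes queued for a slow consumer cross the high watermark, upstream
// (the spout) is asked to pause; when the queue drains below the low
// watermark, it is asked to resume.
class BackPressureGate {
 public:
  BackPressureGate(std::size_t high, std::size_t low,
                   std::function<void()> pauseSpout,
                   std::function<void()> resumeSpout)
      : high_(high), low_(low),
        pause_(std::move(pauseSpout)), resume_(std::move(resumeSpout)) {}

  void onBytesQueued(std::size_t n) {
    outstanding_ += n;
    if (!active_ && outstanding_ > high_) {
      active_ = true;
      pause_();  // start back pressure: stop the spout from emitting
    }
  }

  void onBytesDrained(std::size_t n) {
    outstanding_ -= (n < outstanding_ ? n : outstanding_);
    if (active_ && outstanding_ < low_) {
      active_ = false;
      resume_();  // stop back pressure: let the spout emit again
    }
  }

 private:
  std::size_t high_, low_;
  std::size_t outstanding_ = 0;
  bool active_ = false;
  std::function<void()> pause_, resume_;
};

The point relevant to this thread: while the gate is active the spout stops producing, so queues should drain rather than grow without bound.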
@congwang were you able to reproduce the issue with the topology I sent you? |
@ajorgensen I am still trying to figure out how to build your topology. |
Oh ok. You should be able to create a simple pom file and build it with Maven. Let me see if I can put one together for you. |
@congwang sorry about that. I've emailed you the same project but with a working pom.xml file now. Let me know if you have any trouble building the topology |
I guess we need the following fix:

diff --git a/heron/common/src/cpp/network/connection.cpp b/heron/common/src/cpp/network/connection.cpp
index c03ea8d..90cfbf3 100644
--- a/heron/common/src/cpp/network/connection.cpp
+++ b/heron/common/src/cpp/network/connection.cpp
@@ -240,6 +240,8 @@ void Connection::handleDataWritten() {
 sp_int32 Connection::readFromEndPoint(sp_int32 fd) {
   sp_int32 bytesRead = 0;
+  if (mUnderBackPressure)
+    return 0;
   while (1) {
     sp_int32 read_status = mIncomingPacket->Read(fd);
     if (read_status == 0) {
|
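Why that one-liner can matter: returning 0 from readFromEndPoint while under back pressure means the stream manager stops draining the socket, so unread data piles up in the kernel receive buffer and TCP flow control eventually stalls the remote sender, instead of the stream manager buffering those tuples in its own memory. Below is a standalone sketch of the same pattern, using a simplified Connection type that is illustrative rather than Heron's actual class:

#include <unistd.h>
#include <cerrno>

// Simplified sketch of the pattern in the patch above (not Heron's code):
// while under back pressure, skip the read entirely. Unread bytes then
// accumulate in the kernel receive buffer; once it fills, TCP flow control
// stalls the sender, so no new tuples land in user-space memory.
struct Connection {
  int fd = -1;
  bool underBackPressure = false;

  // Returns total bytes read, 0 when paused or drained, -1 on error.
  long readFromEndPoint() {
    if (underBackPressure) return 0;  // the essence of the proposed fix
    char buf[4096];
    long total = 0;
    while (true) {
      ssize_t n = ::read(fd, buf, sizeof(buf));
      if (n > 0) { total += n; continue; }                 // keep draining
      if (n == 0) break;                                   // peer closed
      if (errno == EAGAIN || errno == EWOULDBLOCK) break;  // drained for now
      return -1;                                           // real error
    }
    return total;
  }
};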
Seems simple enough. Did you give it a try with the test topology? If you
put together a patch I can also apply it to our internal build and try it
out on a production topology to verify.
Andrew Jorgensen
@ajorgensen
|
@ajorgensen I am trying it, but I'm definitely not sure yet: when back pressure happens, the spout should already have stopped sending data; only the pending data is still being transmitted. At least it could help us rule out that case. |
@congwang has informed @kramasamy that this is not an issue that can be resolved soon. Delaying this to 0.15.0. |
We've observed what appears to be the stream manager not releasing memory properly, or holding onto memory indefinitely, on Heron 0.14.4. The following image shows the RSS of the Heron stream manager; the red vertical line separates the same topology deployed on 0.14.4 (left of the line) and 0.14.1 (right of the line). On 0.14.4 the stream manager's RSS grows until it pushes the container above its allocated memory and the container is forcibly killed, while the same topology on 0.14.1 shows fairly consistent memory usage.

I do not currently have a test case that demonstrates this behavior, but I will work on setting one up. It appears to happen only on topologies that use acking; we have not seen this behavior on those that do not.