memory leak #149
Close your publishers and consumers when you're done with them and do not attempt to reuse them. Only close the connection itself once you've closed all associated publishers and consumers.

The issue was resolved by calling conn.Close(), but it's not practical to create a new Conn every time. I did not reuse the publisher, but I did reuse the Conn. When the publisher closed, startNotifyBlockedHandler was still blocking, so I ended up reusing the publisher.
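For reference, a minimal sketch of that teardown order, assuming the v5-style `rabbitmq.NewConn` / `rabbitmq.NewPublisher` API (the URL is a placeholder):

```go
package main

import (
	"log"

	rabbitmq "github.com/wagslane/go-rabbitmq"
)

func main() {
	// One long-lived connection for the whole process.
	conn, err := rabbitmq.NewConn("amqp://guest:guest@localhost")
	if err != nil {
		log.Fatal(err)
	}
	// Closed last, only after every publisher/consumer on it is closed.
	defer conn.Close()

	publisher, err := rabbitmq.NewPublisher(conn)
	if err != nil {
		log.Fatal(err)
	}
	// Closed first (deferred calls run in reverse order).
	defer publisher.Close()

	// ... publish messages here ...
}
```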
Tough, that seems to be a leaky goroutine :-( Indeed, the design does not cater for its termination. Short of moving things around, e.g. a) providing an additional control channel, like a context triggered on publisher close events, or b) making the blocked chan parameter part of the publisher struct (so it closes automatically when the publisher is GC'd), plus changing the range over blockings into a for/select that also listens on this additional channel in addition to the blockings channel.
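A generic sketch of what that for/select change could look like; the function and parameter names here are hypothetical, not the library's actual internals:

```go
package example

import (
	amqp "github.com/rabbitmq/amqp091-go"
)

// startNotifyBlockedHandler is a hypothetical rework of the handler loop:
// instead of ranging over the blockings channel until the connection closes
// it, a for/select also watches a per-publisher done channel so the
// goroutine can exit when the publisher is closed.
func startNotifyBlockedHandler(blockings <-chan amqp.Blocking, done <-chan struct{}) {
	for {
		select {
		case b, ok := <-blockings:
			if !ok {
				return // channel closed by the connection shutdown
			}
			_ = b.Active // update the "blocked" flag here (details omitted)
		case <-done:
			return // publisher closed: stop listening so the goroutine can end
		}
	}
}
```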
It seems to me that the most likely issue is that you have a lot of Publish() calls getting blocked in that function. You could add a context timeout, and that should stop them from hanging forever.
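A small sketch of that suggestion, assuming the publisher exposes a `PublishWithContext` method and the `WithPublishOptionsExchange` option (routing key and exchange name are placeholders):

```go
package example

import (
	"context"
	"log"
	"time"

	rabbitmq "github.com/wagslane/go-rabbitmq"
)

// publishWithTimeout bounds a single publish with a deadline so a blocked
// publish call cannot hang forever.
func publishWithTimeout(publisher *rabbitmq.Publisher, body []byte) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	err := publisher.PublishWithContext(
		ctx,
		body,
		[]string{"my_routing_key"},
		rabbitmq.WithPublishOptionsExchange("events"),
	)
	if err != nil {
		log.Printf("publish failed or timed out: %v", err)
	}
	return err
}
```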
Hello, this still seems to be an issue. I could reproduce this goroutine leak by using the example from the readme and printing the stack trace before and after closing the publisher (a sketch of that check follows this comment). The goroutine started at publisher creation does not terminate when the publisher is closed. The problem is that the goroutine is handed a notify-blocked channel that is registered on the connection, so it only ends when that channel is closed. But a long-lived service will stay connected to RabbitMQ and therefore hold that channel, and the goroutine, open. Contrary to startNotifyFlowHandler, which is also started on creation of a publisher but whose notification channel is closed along with the publisher, the blocked-notification channel lives as long as the connection. I have two ideas for a fix:

I could implement the first. The second option would need more discussion and implementation work. Take care!
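A quick way to take those before/after snapshots is to dump all goroutine stacks, for example:

```go
package example

import (
	"os"
	"runtime/pprof"
)

// dumpGoroutines prints every goroutine's stack; calling it before and after
// publisher.Close() makes a leftover startNotifyBlockedHandler goroutine visible.
func dumpGoroutines() {
	_ = pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
}
```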
We create the publisher once with once.Do around rabbitmq.NewPublisher and reuse it; reusing this connection has not caused any problems after large-scale, high-concurrency testing.
For me this does not fix the problem. It does not change the fact that even initializing a single publisher and closing it will not stop all the goroutines started by that publisher, which still results in a leaked goroutine.
Our business case is relatively simple, with only one configuration, so we can use a global GetPublisher, which has withstood the test of high concurrency. If you have multiple configurations, my suggestion is to keep the differently configured publishers in a publisher pool, so publishers with the same configuration are reused.
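A minimal sketch of that once.Do pattern for a single shared publisher (the GetPublisher name and the option-free constructor call are assumptions):

```go
package example

import (
	"sync"

	rabbitmq "github.com/wagslane/go-rabbitmq"
)

var (
	pubOnce   sync.Once
	publisher *rabbitmq.Publisher
	pubErr    error
)

// GetPublisher lazily creates one shared publisher and returns the same
// instance on every call, so no extra handler goroutines pile up per publish.
func GetPublisher(conn *rabbitmq.Conn) (*rabbitmq.Publisher, error) {
	pubOnce.Do(func() {
		publisher, pubErr = rabbitmq.NewPublisher(conn)
	})
	return publisher, pubErr
}
```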
Ah, thank you for your example @lmb1113. We have a different use case: we open a lot of publishers because each client gets its own queue and exchange for incoming messages.
Hello, I don't want to necro-bump, but @wagslane should I open a new issue for this? Or can this issue be reopened?
Sorry, I've been crazy busy at work. I want to look at this when I have some time, I just haven't had a chance yet.
Hello, I noticed that each publisher starts its own startNotifyBlockedHandler. Can we adjust it so the same handler is shared per conn? My understanding is that the purpose of startNotifyBlockedHandler is to prevent publishing when the connection is not available.
publisher.startNotifyBlockedHandler() -> conn.startNotifyBlockedHandler()?
We also ran into this issue. We don't reuse publishers or consumers, but when there are a lot of messages to publish we get OOMs at ~2-3 GB of memory usage, while regular memory usage is about 100-200 MB. Hope you can fix it.
I suggest reusing publishers and consumers.
@lmb1113 we tried that approach but got a lot of errors like here:
@hotrush do you reuse the connections as well as the publishers/consumers?
@lmb1113 the connection is always reused. With the publisher reused we get connection/reconnection issues; with the publisher not reused we get memory leaks. Something like that.
@mjurczik So not all resources are necessarily expected to be cleaned up when you close the publisher; you would need to also close the underlying connection itself. When you close both, are you still seeing a "leak"? (And if that's not a suitable use case you might want to just drop down to the amqp lib.)
Hello,
IMO this should not be expected. A connection should be reusable; removing a publisher and having it free its memory along with its started goroutines should not force you to close the connection as well.

Sure, if I shut down my program, gracefully close all publishers and afterwards close the connection, the routines are stopped gracefully. That happens because the shutdown in conn.Close() closes the channels the handler goroutines are blocked on. Closing the connection still does not seem like a suitable fix: let's say we have 3 publishers and we want to close one of them because it is not needed anymore.

I thought about my "fix" with the lifetime; the problem is it would finish the routine and allow the publisher plus routine to be garbage collected, but the chan would still be held in the connection manager. If you like I can implement this, create a PR, and you can still decide whether it makes sense for you :)
Yeah this makes sense to me @mjurczik! If we have 3 publishers on a single connection, we should be able to close 1 publisher, keep the connection alive for the other 2, and release any publisher-specific resources. If you'd like to propose a PR I will take a look when I have time.
- closing a publisher did not close all associated goroutines. The routine listening for block events on the connection was only terminated on closing the connection. The wrapping connection manager now holds all blocking channels and only passes down a universal blocking channel. If a signal reaches this channel it is broadcast to all blocking channels. This allows telling the connection manager to remove and close a channel when its publisher is closed. Closes wagslane#149
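A simplified, hypothetical sketch of that broadcast idea (not the actual code from the commit): the connection manager keeps one channel per publisher, fans each blocking notification out to all of them, and removes and closes a single channel when its publisher closes, so that publisher's handler goroutine can exit.

```go
package example

import (
	"sync"

	amqp "github.com/rabbitmq/amqp091-go"
)

// blockedBroadcaster fans connection "blocked" notifications out to
// per-publisher channels and lets a single channel be removed and closed
// when its publisher is closed.
type blockedBroadcaster struct {
	mu   sync.Mutex
	subs map[chan amqp.Blocking]struct{}
}

func newBlockedBroadcaster() *blockedBroadcaster {
	return &blockedBroadcaster{subs: make(map[chan amqp.Blocking]struct{})}
}

// subscribe registers a new per-publisher channel.
func (b *blockedBroadcaster) subscribe() chan amqp.Blocking {
	ch := make(chan amqp.Blocking, 1)
	b.mu.Lock()
	b.subs[ch] = struct{}{}
	b.mu.Unlock()
	return ch
}

// unsubscribe removes and closes a publisher's channel, unblocking its handler.
func (b *blockedBroadcaster) unsubscribe(ch chan amqp.Blocking) {
	b.mu.Lock()
	if _, ok := b.subs[ch]; ok {
		delete(b.subs, ch)
		close(ch)
	}
	b.mu.Unlock()
}

// broadcast forwards a blocking notification to every subscribed publisher.
func (b *blockedBroadcaster) broadcast(blocking amqp.Blocking) {
	b.mu.Lock()
	for ch := range b.subs {
		select {
		case ch <- blocking:
		default: // don't block the broadcaster on a slow subscriber
		}
	}
	b.mu.Unlock()
}
```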
Hey @wagslane, sorry for the holdup. Just pushed my fix, hope this helps.
My English is not very good, please understand. As the concurrency of my project increases, memory usage also increases. After investigation, I found the cause: the publisher.startNotifyBlockedHandler() goroutine fails to exit and release its memory in a timely manner. I am not sure what causes it to keep running. My expected outcome is that publisher.Close() shuts it down normally.
After commenting out that line of code, I found it works, but it is not elegant.