
Block shutdown until all pending messages are sent (Fixes #27). #28

Closed

Conversation

@kingosticks

My first attempt didn't include a new sync_channel; I had expected that to be enough to make the Tokio runtime within the "mdns-responder" thread keep polling the futures until they all returned Poll::Ready(()), after first sending everything sitting in their outgoing queues (specifically the goodbye messages). But that didn't work: it's as if the block_on call didn't actually block and prevent the thread from stopping before we were finished. I would very much like to hear from anyone who can explain why that happens; I do not have a good grasp of Tokio, let alone Futures, or even Rust...

I would not be surprised to learn there's a far nicer way of doing this, but after much frustration this is where I ended up! Feel free to scrap this and implement a nicer fix.

…rg#27).

Ideally the task within our "mdns-responder" thread would block
until all the futures actually return Poll::Ready but I can't get
that to work, hence the explicit extra blocking I have added.
@kingosticks (Author)

Turns out it's obvious why the task wasn't polling to completion once you look again with fresh eyes: we need to join the thread we created for our runtime to live in.

However, I got bored fighting the compiler, and since I couldn't see why each Service had its own reference to the Shutdown, I just got rid of that and made things simpler. I imagine there was actually a good reason it was that way, but I'll need some help overcoming the compiler.

@willstott101 (Contributor)

Thanks - it looks like the Arc around the Shutdown in Responder and Service was intended to make sure the thread is kept alive if either the Service or Responder is kept around. Updating the example to (the slightly ugly):

    let _svc = {
        let responder = libmdns::Responder::new().unwrap();
        responder.register(
            "_http._tcp".to_owned(),
            "libmdns Web Server".to_owned(),
            80,
            &["path=/"],
        )
    }; // Responder is dropped as we leave this block... so the thread is shutdown immediately.

makes the responder immediately stop responding.

I might come up with a solution soon-ish; to be honest this library needs a good refactor to use async/await, and that would likely change things a lot.

@kingosticks (Author)

Ahh I see, that makes sense. Thank you for that example. Please feel free to close this as you see fit; I enjoyed messing around with this, but I found it hard work, so I'll leave it in your hands.

@roderickvd (Member)

To bump this one, the current (pre-PR) behavior causes librespot to linger around on the Connect tab on an ungraceful exit.

@willstott101 willstott101 deleted the branch librespot-org:master May 8, 2022 14:28
@willstott101 (Contributor) commented May 8, 2022

Auto-closed as I renamed master -> main. A solution to #27 would be very welcome, but as a new PR I guess. Thanks!
