Support running recognize in a separate container (with GPU support) #1061

Closed
SilleBille opened this issue Dec 25, 2023 · 10 comments
Labels: enhancement (New feature or request)

Comments

@SilleBille

SilleBille commented Dec 25, 2023

Describe the feature you'd like to request

With the nextcloud/all-in-one setup that spawns multiple containers, I'd like to run the recognize app in a separate container that the nextcloud-aio-nextcloud container can use. This would let the user grant GPU access to that single container instead of exposing it to the whole Nextcloud ecosystem. It also provides more portability and isolation.

Describe the solution you'd like

A Docker container exposed on a specific port so that the main nextcloud-aio-nextcloud container can interact with it over the shared nextcloud-aio bridge network.

Describe alternatives you've considered

A Docker container that shares, in read-only mode, all the existing volumes and mounts used by the nextcloud-aio-nextcloud container. I am also trying to build a container with CUDA + cuDNN and implement this solution myself. (I'll keep this issue updated based on the results.)
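
For illustration only, here is a rough Docker Compose sketch of what such a setup could look like. The service name, image, and port are hypothetical (no standalone recognize image exists today); only the GPU reservation syntax is standard Compose, and the nextcloud-aio network name is stated as an assumption.

```yaml
# Hypothetical sketch: service name, image and port are made up for illustration.
services:
  recognize-external:
    image: example/recognize-external:latest   # hypothetical image
    ports:
      - "8080:8080"                            # port the nextcloud-aio-nextcloud container would call
    networks:
      - nextcloud-aio                          # assumed name of the AIO bridge network
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia                   # standard Compose GPU reservation
              count: 1
              capabilities: [gpu]

networks:
  nextcloud-aio:
    external: true                             # join the existing AIO network instead of creating one
```

With something like this, only the recognize-external service needs the NVIDIA runtime; the rest of the AIO stack stays untouched.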

SilleBille added the enhancement (New feature or request) label on Dec 25, 2023

Hello 👋

Thank you for taking the time to open this issue with recognize. I know it's frustrating when software causes problems. You have made the right choice to come here and open an issue to make sure your problem gets looked at and, if possible, solved.

I try to answer all issues and, if possible, fix all bugs here, but it sometimes takes a while until I get to it. Until then, please be patient.

Note also that GitHub is a place where people meet to make software better together. Nobody here is under any obligation to help you, solve your problems, or deliver on any expectations or demands you may have, but if enough people come together we can collaborate to make this software better. For everyone. Thus, if you can, you could also look at other issues to see whether you can help other people with your knowledge and experience. If you have coding experience it would also be awesome if you could step up to dive into the code and try to fix the odd bug yourself. Everyone will be thankful for extra helping hands!

One last word: if you feel, at any point, like you need to vent, this is not the place for it; you can go to the forum, to Twitter, or somewhere else. But this is a technical issue tracker, so please make sure to focus on the tech and keep your opinions to yourself. (Also see our Code of Conduct. Really.)

I look forward to working with you on this issue
Cheers 💙

@marcelklehr
Member

Hi @SilleBille

Nextcloud GmbH is planning to make this happen soon-ish :)

@oliverhu

oliverhu commented Jan 7, 2024

+1 to this, especially the integration with AIO.

@Passaita

That would be a good way to improve the Nextcloud ecosystem and allow users to install the containers that they require.

@danieloppenlander

danieloppenlander commented Jan 18, 2024

Any updates on this? Would be great to have the video tagging and performance benefits. I might have a little dev time to contribute if someone could point me in the right direction.

@bugsyb

bugsyb commented Apr 8, 2024

If you need this right now, there's a mod I've done:
https://github.com/bugsyb/recognize_docker/

The way I have it set up is very close to, if not exactly, what the requester raised: in my case it runs on a completely separate system, accesses the files over NFS, and stores all data in a shared Postgres DB.

Use the nvidia-tensor-based image if you're fine with some mappings being done there.

@marcelklehr
Member

Nextcloud GmbH is planning to make this happen soon-ish

Sorry to say, the plans have been scrapped due to lack of engineering time so far. It's still on our list of things that would be nice to have, but it's not scheduled any time soon :/

@rowanmoul

rowanmoul commented Apr 16, 2024

the plans have been scrapped due to lack of engineering time

Are you open to community contributions on this one? If so, it would be helpful to have an outline of how you intended to implement it, if one exists.

I'm not sure I will personally have the time, but perhaps someone else does.

@marcelklehr
Member

I'd be open to community contributions on this.

My rough plan would be not to deviate too much from how the models are run right now. Instead of the Classifier class executing node.js directly, there would be an option in the settings to call out to the recognize External App instead, or perhaps the external app could be auto-detected. The external app would do the same thing as the Classifier class: execute node.js and return the JSON-line results so they can be processed in the original recognize app. These are the current docs on how App API / External Apps work: https://cloud-py-api.github.io/app_api/index.html
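
To make that concrete, here is a minimal sketch (TypeScript for Node.js) of what the external-app side could look like under that plan. Everything specific in it is an assumption made up for illustration: the /classify endpoint, the request shape, the port, and the classifier script path. A real integration would go through the App API / External Apps mechanism linked above rather than a bare HTTP server.

```typescript
// Hypothetical sketch only: endpoint path, request shape, port and classifier
// script path are assumptions; a real integration would use the App API.
import { createServer } from "node:http";
import { spawn } from "node:child_process";

// Assumed location of one of recognize's node.js classifier scripts.
const CLASSIFIER_SCRIPT = "/app/src/classifier_imagenet.js";

const server = createServer((req, res) => {
  if (req.method !== "POST" || req.url !== "/classify") {
    res.statusCode = 404;
    res.end();
    return;
  }

  let body = "";
  req.on("data", (chunk: Buffer) => { body += chunk; });
  req.on("end", () => {
    // Assumed request body: { "paths": ["/path/to/file1.jpg", ...] }
    const { paths } = JSON.parse(body) as { paths: string[] };

    // Do the same thing the Classifier class does locally: spawn node.js
    // on a classifier script with the file paths as arguments.
    const child = spawn("node", [CLASSIFIER_SCRIPT, ...paths]);

    // Relay the classifier's JSON-line output unchanged, so the original
    // recognize app can parse it exactly as it does today.
    res.writeHead(200, { "Content-Type": "application/x-ndjson" });
    child.stdout.pipe(res);
    child.stderr.pipe(process.stderr);
  });
});

server.listen(8080, () => console.log("recognize external app listening on :8080"));
```

The recognize side would then POST file paths to such an endpoint instead of spawning node.js itself and consume the same JSON-line stream, which is why the change to the existing Classifier code could stay small.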

@marcelklehr
Member

I'm closing this in favor of #73, which is basically the same thing. Upvote there to make it more likely I get to work on this :)

Labels: enhancement (New feature or request)
Projects: Status: Done
Development: No branches or pull requests
7 participants