Speed of Inference on GeForce GTX 1080 #61

Open
tralfamadude opened this issue Nov 18, 2020 · 1 comment

Comments

@tralfamadude

My testing, based on a variation of demo.py for classification with 7 labels/classes, shows choppy performance on a GPU. Excluding Python post-processing and ignoring the first two inferences, I see per-image processing durations like 0.09, 0.089, 0.56, 0.56, 0.079, 0.39, 0.09, ...; the average over 19 images is 0.19 s per image.

I'm surprised by the variance.

At 5 images/sec it is workable, but it could be better. Would TensorFlow Serving help by getting Python out of the loop? I need to process 1M images per day.

(The GPU is a GeForce GTX 1080 using 10.8 GB of its 11 GB of memory; only one TF session is used for the multiple inferences.)
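For reference, a minimal sketch of the timing loop described above; `run_inference` and `image_paths` are placeholders for whatever per-image call demo.py makes and for the test images, not dhSegment's actual API:

```python
import time
import numpy as np

# Minimal timing sketch (illustrative only): `run_inference` stands in for
# the per-image inference call and `image_paths` for the test images --
# both are placeholders, not dhSegment's actual API.
def benchmark(run_inference, image_paths, warmup=2):
    durations = []
    for i, path in enumerate(image_paths):
        start = time.time()
        run_inference(path)
        elapsed = time.time() - start
        if i >= warmup:  # skip the first inferences (graph build / CUDA warm-up)
            durations.append(elapsed)
    d = np.array(durations)
    print("mean %.3f s, std %.3f s, min %.3f s, max %.3f s"
          % (d.mean(), d.std(), d.min(), d.max()))
    return d
```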

@SeguinBe
Collaborator

One of the trade-offs of using such a large underlying model (ResNet-50) for accuracy is processing speed. It is unlikely that TensorFlow Serving would really improve it, as most of the computation already runs on the GPU anyway.

The alternative is to use a smaller pretrained backbone network, which seems to be available in the more recent PyTorch rewrite: https://github.com/dhlab-epfl/dhSegment-torch.
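For a rough sense of how much smaller such a backbone is, here is a hedged comparison using plain torchvision models (illustrative only, not dhSegment-torch's configuration API):

```python
import torchvision.models as models

# Rough comparison of backbone sizes; torchvision models are used purely for
# illustration -- this is not dhSegment-torch's configuration API.
def n_params(model):
    return sum(p.numel() for p in model.parameters())

resnet50 = models.resnet50()   # the backbone family mentioned above
resnet18 = models.resnet18()   # example of a smaller backbone

print("ResNet-50 parameters: %.1fM" % (n_params(resnet50) / 1e6))  # ~25.6M
print("ResNet-18 parameters: %.1fM" % (n_params(resnet18) / 1e6))  # ~11.7M
```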
