
[0609 15:27:39 @base.py:252] Epoch 14 (global_step 10934) finished, time:59.7 seconds. #18

Open · meidachen opened this issue Jun 9, 2018 · 2 comments

Comments

@meidachen

I'm running with dual 1080 Ti GPUs and each epoch takes about 1 minute, using the same parameters you used. Is this a reasonable running time?

And how should I use my own data for training and testing?

Thank you in advance!!!
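(For context on the second question: custom data is usually fed into tensorpack through a DataFlow. The sketch below is only an illustration under the 2018-era `get_data()`/`size()` API; `MyImageFiles` and `file_label_pairs` are hypothetical names, not part of this repo.)

```python
# Minimal sketch of a custom tensorpack DataFlow over your own images.
# Assumes `file_label_pairs` is a list of (image_path, integer_label) tuples.
import cv2
from tensorpack.dataflow import RNGDataFlow

class MyImageFiles(RNGDataFlow):
    def __init__(self, file_label_pairs, shuffle=True):
        self.file_label_pairs = list(file_label_pairs)
        self.shuffle = shuffle

    def size(self):
        return len(self.file_label_pairs)

    def get_data(self):
        idxs = list(range(len(self.file_label_pairs)))
        if self.shuffle:
            self.rng.shuffle(idxs)      # self.rng is provided by RNGDataFlow
        for i in idxs:
            fname, label = self.file_label_pairs[i]
            img = cv2.imread(fname, cv2.IMREAD_COLOR)   # HWC, BGR image
            yield [img, label]
```

Such a dataflow could then go through the same augmentation and batching steps as the built-in Cifar10 dataflow in the example scripts.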

@Sirius083

I have the same GPUs on Windows and trained DenseNet (L=40, k=12); the total training time was 5 h 10 min
(without imgaug.MapImage(lambda x: x - pp_mean) and PrefetchData).
But other DenseNet implementations I tried cannot reach the accuracy reported in the original paper.
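(For readers unfamiliar with the two steps mentioned above, here is a rough sketch of how they typically appear in a tensorpack CIFAR-10 input pipeline; the exact augmentors and arguments may differ from the repo's actual script.)

```python
from tensorpack.dataflow import (dataset, imgaug, AugmentImageComponent,
                                 BatchData, PrefetchData)

def get_train_dataflow():
    ds = dataset.Cifar10('train')
    pp_mean = ds.get_per_pixel_mean()            # per-pixel mean of the training set
    augmentors = [
        imgaug.CenterPaste((40, 40)),
        imgaug.RandomCrop((32, 32)),
        imgaug.Flip(horiz=True),
        imgaug.MapImage(lambda x: x - pp_mean),  # the mean-subtraction step mentioned above
    ]
    ds = AugmentImageComponent(ds, augmentors)
    ds = BatchData(ds, 64)
    ds = PrefetchData(ds, 5, 2)                  # prefetch batches using 2 extra processes
    return ds
```

Dropping MapImage and PrefetchData removes the mean subtraction and the multi-process prefetching, which can affect both accuracy and per-epoch time.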

@Sirius083

I have the same question on this issue. ResNet-32 on CIFAR-10 takes about 1/3 of the training time of DenseNet. Do you know what causes this? Thanks in advance.
