fixed the tf issue #74564 #2322
Conversation
I looked into the issue and found that, after evaluation, we get a list of size 4 where only the last element is the accuracy of the `export_model`. So I ignored the other outputs to keep only the required loss and accuracy.
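For context, here is a minimal sketch of the behavior the fix addresses. The tutorial's `export_model` and test dataset are replaced by hypothetical stand-ins; the point is only that `Model.evaluate()` returns a flat list of scalars when a model is compiled with metrics, so the loss and accuracy must be picked out by position:

```python
import tensorflow as tf

# Hypothetical stand-in for the tutorial's export_model: any compiled
# Keras model with metrics makes evaluate() return a list of scalars.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(loss="binary_crossentropy", metrics=["accuracy"])

# Toy data in place of the tutorial's test dataset.
x = tf.random.normal((8, 4))
y = tf.cast(tf.random.uniform((8, 1)) > 0.5, tf.float32)

results = model.evaluate(x, y, verbose=0)  # e.g. [loss, ..., accuracy]
loss, accuracy = results[0], results[-1]   # keep only loss and accuracy
print(f"Loss: {loss:.3f}, Accuracy: {accuracy:.3f}")
```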
Preview: run these notebook edits with Google Colab; rendered notebook diffs are available on ReviewNB.com.

Format and style: use the TensorFlow docs notebook tools to format for consistent source diffs and lint for style:

```
$ python3 -m pip install -U --user git+https://github.com/tensorflow/docs
```

If commits are added to the pull request, synchronize your local branch:

```
$ git pull origin fix-tf-issue-74564
```
Force-pushed the fix-tf-issue-74564 branch from 30ab93d to 70c03a9.
Use `return_dict=True` for evaluate.
Thanks for the fix! I actually always prefer `return_dict=True`.
Thank you, Mark! I'll keep that in mind and make sure to use `return_dict=True` moving forward.
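A sketch of the reviewer's suggestion, reusing the stand-in model and data from the earlier sketch: with `return_dict=True`, `Model.evaluate()` returns a dict mapping metric names to values, so no positional indexing into the results list is needed.

```python
# With return_dict=True, evaluate() returns {metric_name: value}, so the
# loss and accuracy are looked up by name instead of by list position.
results = model.evaluate(x, y, verbose=0, return_dict=True)
print(f"Loss: {results['loss']:.3f}, Accuracy: {results['accuracy']:.3f}")
```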