ENH update Scene Segmentation Tutorial using Proglearn on ISIC #542
base: staging
Conversation
@amyvanee I fixed the issue by updating the CircleCI cache version. In the future, you can resolve similar issues the same way.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

```
@@            Coverage Diff             @@
##           staging     #542      +/-   ##
===========================================
- Coverage    90.09%   89.46%    -0.64%
===========================================
  Files            7        7
  Lines          404      408        +4
===========================================
+ Hits           364      365        +1
- Misses          40       43        +3
```

View full report in Codecov by Sentry.
Are you able to show the same type of annotation images on ProgLearn models?
LGTM. @jdey4, what do you think?
@amyvanee there is no plot for the multitask experiment, so we cannot tell whether it is working or not. Please see the other experiments in the docs to find out how you could display the results.
The multitask plot looks great! But I would remove the fractional task numbers from the x-axis ticks. One thing is confusing to me: the accuracy curves are basically flat, so I do not understand how the backward learning coefficients can change or improve. There is definitely a bug! Also, the scene segmenter is trained on only 1 image, by extracting features that essentially flatten it. That is unusual: normally people do not train on only 1 image while ignoring the vast remaining training images. To me it seems the greyscale conversion renders it a black-and-white image, and the task reduces to detecting a black or white pixel, which is a really easy task. I am not convinced this approach is nontrivial, and it will not solve the segmentation problem in general for all datasets. Perhaps @jovo can enlighten us.
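For context on the coefficients being discussed: as I understand the ProgLearn papers, (backward) transfer efficiency is the ratio of a task's single-task error to its error after later tasks have been added. A toy computation with hypothetical error values (not numbers from this experiment):

```python
# Assumed definition, following the ProgLearn papers:
# BTE for task t = (error using only task t's data)
#                / (error on task t after all later tasks were added).
single_task_error = 0.10  # hypothetical error right after training task t
final_error = 0.08        # hypothetical error after all later tasks added

bte = single_task_error / final_error
print(f"BTE = {bte:.2f}")  # a value > 1 means later tasks helped (backward transfer)
```

If the accuracy on a task truly never changes as later tasks arrive, this ratio stays at exactly 1, which is the reviewer's point.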
Thank you for reviewing this! To address each of your notes,
Thank you for the suggestion, I have made the update!
The graphs appear flat, but they are actually not horizontal. For example, the top graph's values are [0.95633099, 0.95661165, 0.95636033, 0.95565486, 0.95633099], so the values are not constant; the same holds for the other plots. I would also like to add that I am using the same approach as the rest of my team for training and metrics analysis, to whom I give credit.
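A quick check on the values quoted above confirms they do vary, just on a very small scale, which is why the curves look flat when plotted:

```python
import numpy as np

# Accuracy values quoted above for the top graph.
acc = np.array([0.95633099, 0.95661165, 0.95636033, 0.95565486, 0.95633099])

spread = acc.max() - acc.min()  # peak-to-peak variation
print(f"min={acc.min():.8f} max={acc.max():.8f} spread={spread:.8f}")
```

The spread is under 0.001, so on a y-axis spanning any reasonable accuracy range the curve is visually indistinguishable from a horizontal line.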
I actually took this approach from the Scikit-Image tutorial. They used only one image, and their approach seems valid. They train using sections of that one image, but I decided to give the model the entire image as training data, since there is one lesion in the center, and I then tested on another entire, new image. Furthermore, after working to understand their source code better, I found that Scikit-Image uses this same flattening technique. Lastly, I do not convert the original image to greyscale, only the annotations, just to make them easier to handle: there were already only two labels, so conversion to greyscale did not lose any information, and the input stayed unchanged with 3 color channels. I know that a large concern with merging this PR was whether this method is valid, but I believe it is, because a reputable package like Scikit-Image uses this very technique.
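To make the "flattening" idea concrete, here is a minimal sketch (not the notebook's actual code): each pixel of a single training image becomes one sample with its RGB values as features, and a classifier trained on that one image segments a second, unseen image. The synthetic images and the nearest-centroid classifier are stand-ins for the ISIC data and the ProgLearn forest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the ISIC images: a bright RGB training image
# with a dark "lesion" square in the center, plus its binary annotation mask.
train_img = rng.uniform(0.6, 1.0, size=(64, 64, 3))  # bright "skin"
train_img[16:48, 16:48] *= 0.3                       # dark lesion region
train_mask = np.zeros((64, 64), dtype=int)
train_mask[16:48, 16:48] = 1

# "Flattening": reshape the image so every pixel is one training sample
# with a 3-dimensional (RGB) feature vector.
X = train_img.reshape(-1, 3)
y = train_mask.ravel()

# Toy nearest-centroid classifier (placeholder for the random forest).
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(img):
    feats = img.reshape(-1, 3)
    dists = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(img.shape[:2])

# Segment a second, entirely new image the same way.
test_img = rng.uniform(0.6, 1.0, size=(64, 64, 3))
test_img[20:44, 20:44] *= 0.3
pred = predict(test_img)
print("lesion pixels predicted:", int(pred.sum()))
```

The key point the sketch illustrates is that training and prediction both operate on per-pixel feature rows, so "one image" actually supplies thousands of training samples.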
Thank you @jovo, and thank you @jdey4 for your time reviewing and providing feedback as well! I really appreciate it.
First of all, this approach, https://github.com/KhelmholtzR/ProgLearn/blob/af84f50f4a8759104ded06891acac884b81e3821/docs/experiments/isic_proglearn_nn.ipynb, is not the same as yours. They used the whole dataset and a multi-label voter, as I told you earlier. I think I am done trying to help with this PR. In my view, this approach is not correct or general enough to address arbitrary segmentation problems. Since @jovo will understand it better, he will be the better person to help and merge this PR.
@jdey4 Sorry for not being clear: I meant that I am referencing the experimental part, not the implementation of the model itself. I know you told me earlier that I was wrong to cite them for flattening the images, and thank you for pointing that out, but here I am only looking at their code for the process of adding tasks and calculating the accuracies, FTE, BTE, and TE to make the graphs, nothing else. I understand that they use a neural network, which is a totally different model implementation, but the procedure of adding tasks to the model, gathering accuracies, and creating the graphs should be the same, unless I am mistaken. To clarify: I referenced Scikit-Image to figure out how to use a progressive-learning random forest model to perform scene segmentation, and I referenced Kevin's notebook only to analyze the results (accuracy, etc.) of this model. I am sorry for the confusion, and I would appreciate it if you could look more at the Scikit-Image implementation, which I learned from to develop this PR. Thank you.
Reference issue
Type of change
What does this implement/fix?
Additional information
N/A