Perceptron #759
This exercise feels quite mathy to me, which in general we try to avoid. We'll discuss internally and I'll get back to you.
You might want to reject this PR then, since, if accepted, I was going to ask whether you would like me to continue creating exercises related to introductory machine learning. They are all inherently mathy, though, and I have seen the vitriol some "programmers" have for anything of the sort. I should also note that I've noticed the diminishing importance of the fundamentals of ML, which seem "outsourced" to large-scale applications with user-friendly UIs; at this point, it's almost like trying to teach an engineer how to add. The only thing I would "argue" in favor of taking this on is that Julia is a language used largely in scientific computing, so people there may very well like the math. I didn't write this in Python for a reason.
I wouldn't be in favor of having more exercises related to machine learning. A track's goal is to help teach fluency in that language, whereas machine learning is another thing altogether. That's not to say machine learning isn't a very important subject, just that Exercism isn't designed to help teach it.
No problem, I understand. Thanks for taking a look anyway!
This is an exercise I've made from scratch. There was already significant review in this PR.
The last sticking point, if I remember correctly, was that @cmcaine wanted to include testing of "unseen points", whereas I thought this could be confusing since it appears to conflate two types of algorithm testing:
1. Testing for the correctness of the algorithm, which is the intention of the exercise. A maximum of four points (i.e. the "support vectors") needs to be checked against a returned decision boundary; anything more is technically superfluous, but since it's not straightforward to find just the support vectors, we check against all seen points (a sketch follows this list).
2. Testing for accuracy, which is an ML application and, I feel, beyond the scope of the exercise, and possibly the website, since I can't remember seeing another exercise that goes beyond checking for algorithm correctness. Other than the novelty of demonstrating a check on unseen points, it doesn't add anything substantive, since it does not aid in testing for correctness.
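To make point 1 concrete, here is a minimal sketch of that correctness check in Julia. The `perceptron` function below is an illustrative stand-in for a solution, not the exercise's actual API; it represents the decision boundary as augmented weights `[bias, w1, w2]`, and a boundary is considered correct iff every seen point lies strictly on the side matching its ±1 label:

```julia
# Illustrative stand-in for a solution: a classic perceptron training
# loop over 2D points with labels ±1 (not the exercise's actual API).
function perceptron(points, labels; epochs = 100)
    w = zeros(3)                      # augmented weights [bias, w1, w2]
    for _ in 1:epochs
        for (p, l) in zip(points, labels)
            x = [1.0, p[1], p[2]]     # augmented input
            if l * (w' * x) <= 0      # misclassified, or on the boundary
                w += l * x            # perceptron update rule
            end
        end
    end
    return w
end

# The correctness check: every seen point must lie strictly on the
# side of the returned boundary that matches its label.
function classifies_correctly(w, points, labels)
    all(l * (w[1] + w[2] * p[1] + w[3] * p[2]) > 0
        for (p, l) in zip(points, labels))
end

# Toy linearly separable data.
points = [(1, 1), (2, 3), (-1, -1), (-2, -3)]
labels = [1, 1, -1, -1]
@assert classifies_correctly(perceptron(points, labels), points, labels)
```

Note the strict inequality: a point lying exactly on the boundary counts as misclassified, matching the convention in the update rule, so checking all seen points this way subsumes checking the support vectors.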
Other notes:
- There is no `tests.toml`, since this exercise is not from `problem-specifications`, but I could create one manually using UUIDs generated by configlet, if desired.
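If that's wanted, a hand-written entry might look something like the sketch below. This is an assumption about the layout rather than configlet output, and the UUID is a placeholder (in practice it would come from `configlet uuid`):

```toml
# Hypothetical hand-written tests.toml entry; the UUID is a placeholder
# and the description is illustrative, not from problem-specifications.
[00000000-0000-0000-0000-000000000000]
description = "returns a decision boundary that separates the seen points"
```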