Issue 150: Long Short-Term Memory (LSTM) Binary Classifier for texts #153
Conversation
        0.75
    ]
},
"batch_size": {
batch_size is currently ignored by the keras adapter, so this line can be removed.
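For illustration only, dropping the entry would leave just the surrounding keys. This is a sketch assuming the snippet comes from the tunable hyperparameters section of the primitive's JSON annotation; the dropout entry and its range are placeholders, not taken from this diff:

```json
"hyperparameters": {
    "tunable": {
        "dropout": {
            "type": "float",
            "range": [0.0, 0.75]
        }
    }
}
```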
},
"hyperparameters": {
    "fixed": {
        "classification": {
A fixed hyperparameter called "verbose" with default false should be added. See #143.
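For reference, a minimal sketch of the requested entry, assuming the same JSON layout as the rest of the annotation (the "bool" type name is an assumption; the discussion below ends up using an int instead):

```json
"hyperparameters": {
    "fixed": {
        "verbose": {
            "type": "bool",
            "default": false
        }
    }
}
```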
Well, it actually does work. I guess True/False is internally interpreted as 1/0. Feel free to change it to int type.
@@ -67,6 +67,14 @@
    "type": "int",
    "default": 20
},
"verbose": {
    "type": "int",
    "default": 1,
Please set the default to 0
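Combining the two suggestions above (int type, default 0), the entry would presumably end up as:

```json
"verbose": {
    "type": "int",
    "default": 0
}
```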
@Hector-hedb12 The primitive looks good and can be merged, but I'm afraid the pipeline does not. According to the Keras examples, this architecture is meant to be used with tokenized sequences as input.

I think that here we need to:

In order to avoid having this PR stuck, I will merge it as it is right now, but I will create a new issue to improve the pipeline.
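As a rough sketch of the intended direction, assuming an MLBlocks-style pipeline specification (a JSON list of primitive names), the pipeline could put text tokenization and sequence padding in front of the classifier. The primitive names below are illustrative assumptions, not taken from this PR:

```json
{
    "primitives": [
        "keras.preprocessing.text.Tokenizer",
        "keras.preprocessing.sequence.pad_sequences",
        "keras.Sequential.LSTMTextClassifier"
    ]
}
```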
Resolves #150