How about this as a return object for both the annotate and forget endpoints?
```
{
  "data": [current return object],
  "stats": { name: value pairs for model-fit stats }
}
```
RE: I think that makes a lot of sense. Also, appending the new stats object to the current backend response is very quick to do.
I think we should just decide what the stats object should contain, and which metric we want to calculate.
A first hypothesis would be to generate a per-class score (e.g. per-class precision), along with an overall score for all the faces currently on the board.
In other words, the stats object could have the following structure:
```
{
  "stats": {
    "happy":    <happy score in [0, 1]>,
    "sad":      <sad score in [0, 1]>,
    "angry":    <angry score in [0, 1]>,
    "disgust":  <disgust score in [0, 1]>,
    "fear":     <fear score in [0, 1]>,
    "surprise": <surprise score in [0, 1]>,
    "overall":  <overall global score in [0, 1]>
  }
}
```
When a new face comes in, updated scores for all the faces on the board (the old 24 plus the new one) will be returned.
Would that make any sense?
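For concreteness, here is a minimal sketch of how the per-class scores could be computed, assuming the board's predictions and user annotations are available as parallel lists of emotion labels. The function name, the per-class metric (precision), and the choice of overall metric (micro-averaged precision, i.e. plain accuracy in this single-label setting) are all assumptions, not settled decisions:

```python
# Hypothetical sketch: per-class precision plus an overall score,
# over the faces currently on the board. Names are illustrative.
from collections import defaultdict

EMOTIONS = ["happy", "sad", "angry", "disgust", "fear", "surprise"]

def compute_stats(predictions, annotations):
    """predictions / annotations: parallel lists of emotion labels,
    one entry per face on the board."""
    tp = defaultdict(int)      # correct predictions per class
    total = defaultdict(int)   # total predictions per class
    correct = 0
    for pred, true in zip(predictions, annotations):
        total[pred] += 1
        if pred == true:
            tp[pred] += 1
            correct += 1
    # Per-class precision; None when the class was never predicted,
    # since 0/0 is undefined rather than a bad score.
    stats = {e: (tp[e] / total[e] if total[e] else None) for e in EMOTIONS}
    # "overall" taken here as accuracy (micro-averaged precision).
    stats["overall"] = correct / len(predictions) if predictions else None
    return {"stats": stats}
```

For example, `compute_stats(["happy", "happy", "sad"], ["happy", "sad", "sad"])` yields a happy precision of 0.5, a sad precision of 1.0, and an overall score of 2/3; classes never predicted come back as `None` rather than 0, which the frontend would need to handle.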
Continuing in this issue the conversation started in the latest PR
(@OliverDavis comment)