
Add summation #67

Open · wants to merge 8 commits into develop

Changes from 3 commits
15 changes: 15 additions & 0 deletions pyannote/metrics/base.py
@@ -247,6 +247,21 @@ def __iter__(self):
        for uri, component in self.results_:
            yield uri, component

    def __add__(self, other):
        cls = self.__class__
        result = cls()

Member:
Can we find a way to make sure result is initialized with the same options (e.g. collar and skip_overlap for DiarizationErrorRate instances) as self and other?

This probably means adding some kind of sklearn-like mechanism to clone metrics.
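
For reference, this is the existing sklearn mechanism being alluded to (real sklearn API, shown here only for context):

    from sklearn.base import clone
    from sklearn.linear_model import LogisticRegression

    est = LogisticRegression(C=0.5)
    # clone() returns a new, unfitted estimator constructed with the
    # same parameters as the original instance.
    est2 = clone(est)
    assert est2.get_params()['C'] == 0.5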

Contributor Author:
Good point! I cribbed this PR from some monkey patching I did a couple of years ago for an internal SAD scoring tool; in that context only default parameter values were used, so the issue never came up. After looking at how sklearn handles this, maybe we could add a similar method to ensure the resulting instance is initialized with the same arguments as the first summand. If so, I should also document that sum([m1, m2, ...]) assumes all metrics were initialized identically, which seems a reasonable requirement.

Contributor Author:
I've made an initial attempt at a clone function, which also required implementing a get_params method for metrics. NOTE that get_params assumes all subclasses of BaseMetric include **kwargs in their signatures and pass these keyword arguments on to the constructor of the superclass (or, if there are multiple superclasses, to one of them). Should this assumption be violated, weirdness could ensue.

An alternate approach would be the one used within sklearn, which bans the use of *args and **kwargs in constructors and forces each metric to be explicit about its parameters. This would mean touching more lines of the codebase but, beyond being a bit of a chore, shouldn't be difficult to implement.
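
A minimal sketch of what a get_params/clone pair along these lines could look like (illustrative only, not the PR's actual implementation; following sklearn's convention, it assumes each constructor parameter is stored as an attribute of the same name):

    import inspect

    def get_params(self):
        # Walk the MRO and collect every named constructor parameter,
        # skipping *args/**kwargs, which are assumed to be forwarded
        # to a superclass constructor as described above.
        params = {}
        for klass in type(self).__mro__:
            init = klass.__dict__.get('__init__')
            if init is None:
                continue
            for name, p in inspect.signature(init).parameters.items():
                if name == 'self' or p.kind in (p.VAR_POSITIONAL, p.VAR_KEYWORD):
                    continue
                params[name] = getattr(self, name)
        return params

    def clone(self):
        # Fresh, empty instance of the same class, initialized with
        # the same constructor arguments as the original.
        return type(self)(**self.get_params())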

Comment on lines +326 to +327

Member:
Once clone becomes a method of BaseMetric, we would do:

Suggested change:
-        cls = self.__class__
-        result = cls()
+        result = self.clone()

        result.results_ = self.results_ + other.results_
        for cname in self.components_:
            result.accumulated_[cname] += self.accumulated_[cname]
            result.accumulated_[cname] += other.accumulated_[cname]
        return result

    def __radd__(self, other):
        if other == 0:

Member:
Is this how the built-in sum function initializes its own accumulator?
Would be nice to add a quick comment mentioning this...

Contributor Author:
The built-in sum actually has a start parameter that controls the initial value of the summation. As you might gather, the default value for start is 0, so I just hard-coded that value as an additive identity for metrics. It probably would be good to add a one- or two-line comment to this effect, to save someone from having to read up on __radd__ and sum.
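
Concretely, a rough pure-Python equivalent of the built-in sum (my_sum is illustrative, not actual CPython code), which shows why the accumulator starts at 0 and why the 0 check in __radd__ is needed:

    def my_sum(iterable, start=0):
        # The accumulator starts at 0 by default, so the first addition
        # is 0 + m1; int.__add__ does not know about metrics and returns
        # NotImplemented, so Python falls back to m1.__radd__(0).
        acc = start
        for item in iterable:
            acc = acc + item
        return acc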

            return self
        else:
            return self.__add__(other)

    def compute_components(self,
                           reference: Union[Timeline, Annotation],
                           hypothesis: Union[Timeline, Annotation],
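
As a hypothetical usage sketch of what this summation support enables (score_one and items are illustrative names, not part of the PR): per-file metrics can be computed in parallel and combined afterwards.

    from multiprocessing import Pool

    from pyannote.metrics.detection import DetectionAccuracy

    def score_one(args):
        # Score a single file with its own metric instance.
        reference, hypothesis, uem = args
        metric = DetectionAccuracy()
        metric(reference, hypothesis, uem=uem)
        return metric

    # items is assumed to be a list of (reference, hypothesis, uem) triples.
    with Pool() as pool:
        metrics = pool.map(score_one, items)

    # The new __add__/__radd__ methods make the built-in sum work here.
    total = sum(metrics)
    print(abs(total))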
95 changes: 95 additions & 0 deletions tests/test_base.py
@@ -0,0 +1,95 @@
#!/usr/bin/env python
# encoding: utf-8

# The MIT License (MIT)

# Copyright (c) 2020 CNRS

# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:

# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

# AUTHORS
# Hervé BREDIN - http://herve.niderb.fr


import pytest

from pyannote.core import Annotation
from pyannote.core import Segment
from pyannote.core import Timeline
from pyannote.metrics.detection import DetectionAccuracy


import numpy.testing as npt

# rec1
#
# Time          0  1  2  3  4  5  6
# Reference     |-----|
# Hypothesis       |-----|
# UEM           |-----------------|

# rec2
#
# Time          0  1  2  3  4  5  6  7
# Reference        |--------|
# Hypothesis             |--|
# UEM           |--------------------|


@pytest.fixture
def reference():
    reference = {}
    reference['rec1'] = Annotation()
    reference['rec1'][Segment(0, 2)] = 'A'
    reference['rec2'] = Annotation()
    reference['rec2'][Segment(1, 4)] = 'A'
    return reference


@pytest.fixture
def hypothesis():
    hypothesis = {}
    hypothesis['rec1'] = Annotation()
    hypothesis['rec1'][Segment(1, 3)] = 'A'
    hypothesis['rec2'] = Annotation()
    hypothesis['rec2'][Segment(3, 4)] = 'A'
    return hypothesis


@pytest.fixture
def uem():
    return {
        'rec1': Timeline([Segment(0, 6)]),
        'rec2': Timeline([Segment(0, 7)])}


def test_summation(reference, hypothesis, uem):
    # Expected accuracy.
    expected = 9 / 13
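    # Derivation (accuracy = correctly classified time / total UEM time):
    # rec1: speech overlap [1, 2] = 1s, non-speech overlap [3, 6] = 3s -> 4s of 6s.
    # rec2: speech overlap [3, 4] = 1s, non-speech overlap [0, 1] + [4, 7] = 4s -> 5s of 7s.
    # Combined: (4 + 5) / (6 + 7) = 9 / 13.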

    # __add__
    m1 = DetectionAccuracy()
    m1(reference['rec1'], hypothesis['rec1'], uem=uem['rec1'])
    m2 = DetectionAccuracy()
    m2(reference['rec2'], hypothesis['rec2'], uem=uem['rec2'])
    npt.assert_almost_equal(abs(m1 + m2), expected, decimal=3)

    # __radd__
    m = sum([m1, m2])
    npt.assert_almost_equal(abs(m), expected, decimal=3)