The default implementation separates overlapping segments onto different lines (vertically) by parsing the start and end times of all segments linearly from left to right. [source]
While this is visually useful, there is another way to handle overlaps: plot differently-labelled segments on separate lines. This is particularly useful when the labels represent different audio environments and you want to visualize differences in your model's performance across environments.
The actual implementation is quite simple and adds only three new lines of code:
```python
def plot_annotation(self, annotation: Annotation, ax=None, time=True,
                    legend=True, separate_by="optimal"):
    if not self.crop:
        self.crop = annotation.get_timeline(copy=False).extent()
    cropped = annotation.crop(self.crop, mode='intersection')
    labels = cropped.labels()
    labels_dict = {label: i for i, label in enumerate(labels)}
    segments = [s for s, _ in cropped.itertracks()]
    ax = self.setup(ax=ax, time=time)
    for (segment, track, label), y in zip(
            cropped.itertracks(yield_label=True),
            self.get_y(segments)):
        # new: override the packed y-coordinate so that every segment
        # with the same label lands on the same horizontal line
        if separate_by == "labels":
            y = 1. - 1. / (len(labels) + 1) * (1 + labels_dict[label])
        self.draw_segment(ax, segment, y, label=label)
```
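To make the new y-coordinate formula concrete, here is a minimal standalone sketch of just that mapping (independent of pyannote; the label names are illustrative, not taken from any real annotation): it spaces one line per label evenly inside the (0, 1) vertical range of the axes.

```python
# Standalone sketch of the per-label y-coordinate mapping.
# Label names here are hypothetical examples.
labels = ["SYS", "SPEECH", "NOISE"]
labels_dict = {label: i for i, label in enumerate(labels)}

def label_y(label):
    # n labels -> n evenly spaced lines in (0, 1), top to bottom
    # in the order labels were first encountered.
    return 1. - 1. / (len(labels) + 1) * (1 + labels_dict[label])

for label in labels:
    print(label, label_y(label))  # SYS -> 0.75, SPEECH -> 0.5, NOISE -> 0.25
```

Because the mapping depends only on the label (not on the segment), every segment sharing a label is drawn at the same height, which is what makes per-label visual comparison possible.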
Let me know if this works and I can make a PR!
An example of this implementation at work: the model clearly underperforms on the SYS label by comparison.