2024-01-23-hamoud24a.md

---
title: 'ST(OR)$^2$: Spatio-Temporal Object Level Reasoning for Activity Recognition in the Operating Room'
abstract: 'Surgical robotics holds much promise for improving patient safety and clinician experience in the Operating Room (OR). However, it also comes with new challenges, requiring strong team coordination and effective OR management. Automatic detection of surgical activities is a key requirement for developing AI-based intelligent tools to tackle these challenges. The current state-of-the-art surgical activity recognition methods however operate on image-based representations and depend on large-scale labeled datasets whose collection is time-consuming and resource-expensive. This work proposes a new sample-efficient and object-based approach for surgical activity recognition in the OR. Our method focuses on the geometric arrangements between clinicians and surgical devices, thus utilizing the significant object interaction dynamics in the OR. We conduct experiments in a low-data regime study for long video activity recognition. We also benchmark our method against other object-centric approaches on clip-level action classification and show superior performance.'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: hamoud24a
month: 0
tex_title: 'ST(OR)$^2$: Spatio-Temporal Object Level Reasoning for Activity Recognition in the Operating Room'
firstpage: 1254
lastpage: 1268
page: 1254-1268
order: 1254
cycles: false
bibtex_author: Hamoud, Idris and Jamal, Muhammad Abdullah and Srivastav, Vinkle and MUTTER, Didier and Padoy, Nicolas and Mohareri, Omid
author:
- given: Idris
  family: Hamoud
- given: Muhammad Abdullah
  family: Jamal
- given: Vinkle
  family: Srivastav
- given: Didier
  family: MUTTER
- given: Nicolas
  family: Padoy
- given: Omid
  family: Mohareri
date: 2024-01-23
address:
container-title: Medical Imaging with Deep Learning
volume: 227
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 1
  - 23
pdf:
extras:
---