EmaroLab/scene_identification_tagging: the implementation of a semantic scene recognition and learning mechanism
Scene Identification & Tagging (SIT)

A semantic algorithm to learn and recognise compositions of objects based on their qualitative spatial relations. The algorithm is presented here and used for Human-Robot Interaction here. An implementation based on fuzzy reasoning is also available here and presented here.

Algorithm

The algorithm is based on an OWL ontology, the Pellet reasoner, and a couple of mapping functions. Its main purpose is to describe primitive objects through geometric coefficients that define their shape. Then, qualitative spatial relations, such as right/left, front/behind, above/below, parallel, perpendicular, coaxial, and connected, are symbolically computed. Those are mapped into a concrete scene representation (i.e., an individual). The recognition phase is based on instance checking: it looks for the abstract scene representations (i.e., classes) that classify the scene individual. If no such class exists, the algorithm can use the concrete scene as a template to learn its abstract class, which is then used for further classification. Noteworthy, the system is automatically able to reason about similarity between learned scenes. Also, it can be the case that a very complex scene is recognised through a relatively small number of relations that hold (i.e., a sub-scene). To discriminate when those differences are too large, and trigger a new learning procedure, the concept of confidence is introduced as a number within [0,1].
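The confidence measure described above can be thought of as the fraction of the relations required by a learned abstract scene class that also hold in the observed concrete scene. The following is only an illustrative sketch, not the repository's actual API: the real implementation performs instance checking inside the OWL ontology via Pellet, and the class, method, and relation names here are hypothetical.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: SIT computes recognition via OWL instance
// checking with Pellet; names here are hypothetical, not the real API.
public class SceneConfidence {

    // Confidence in [0,1]: fraction of relations required by a learned
    // abstract scene class that actually hold in the concrete scene.
    static double confidence(Set<String> abstractRelations, Set<String> sceneRelations) {
        if (abstractRelations.isEmpty()) return 1.0;
        long matched = abstractRelations.stream()
                .filter(sceneRelations::contains)
                .count();
        return (double) matched / abstractRelations.size();
    }

    public static void main(String[] args) {
        // Relations of a previously learned abstract scene (hypothetical).
        Set<String> learned = new HashSet<>(Set.of(
                "cup rightOf book", "cup on table",
                "book on table", "cup parallel book"));
        // Relations symbolically computed from the current observation.
        Set<String> observed = new HashSet<>(Set.of(
                "cup rightOf book", "cup on table", "book on table"));

        // 3 of the 4 learned relations hold, so confidence is 0.75.
        // Below a chosen threshold, a new learning procedure would trigger.
        System.out.println(confidence(learned, observed));
    }
}
```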

Dependencies

This implementation depends on OWLOOP. It is implemented within the Robot Operating System (ROS) Java bridge. The compilation is based on Gradle, and you can find a complete list of dependency versions in the build.gradle file.
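For orientation, a Gradle dependency block for this kind of stack might look as follows. This is a placeholder sketch only: the artifact coordinates and versions shown are assumptions, and the repository's own build.gradle is the authoritative list.

```gradle
// Placeholder sketch: check the repository's build.gradle for the real
// coordinates and versions of OWLOOP, the OWL API, and the Pellet reasoner.
dependencies {
    compile 'it.emarolab.owloop:owloop:X.Y.Z'                         // hypothetical coordinate
    compile 'net.sourceforge.owlapi:owlapi-distribution:X.Y.Z'        // OWL API
    compile 'com.github.galigator.openllet:openllet-owlapi:X.Y.Z'     // Pellet (Openllet)
}
```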

Examples and Getting Started

For a quick getting-started tutorial, please check out our examples.

Contacts

Work is in progress to give you more details about the algorithm. In the meanwhile, for any information, support, discussion, or comments, do not hesitate to contact me through this GitHub repository or at: [email protected].