
Explainable Object Recognition

Problem

Our client was a medium-sized enterprise whose product portfolio relies heavily on object recognition. In light of upcoming regulatory changes, they recognized the need to provide explanations for the conclusions their system reaches.

Solution

TechnoLynx researched and developed several iterations on this problem, ranging from progressive constructive systems to classical machine learning pipelines.

Our core idea was that, since a good picture tells more than a thousand words, it would be best to design an object recognition algorithm that is heavily intertwined with image synthesis: the intermediate state then becomes interpretable, provided we can bind those interpretations to specific targets.
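
The paragraph above describes the idea only at a high level, and the architecture actually delivered is not disclosed here. As a minimal, purely illustrative sketch of the pattern, the hypothetical PyTorch model below shares one encoder between a classification head and a decoder, so the latent code that drives each prediction can also be rendered back into an image and inspected. All names, layer sizes, and the 32x32 input resolution are assumptions.

```python
import torch
import torch.nn as nn

class SynthesisAwareClassifier(nn.Module):
    """Hypothetical model: one encoder shared by a classifier and a decoder."""

    def __init__(self, num_classes: int = 10, latent_dim: int = 64):
        super().__init__()
        # Encoder: image -> latent code (the interpretable intermediate state).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        # Recognition head: latent code -> class logits.
        self.classifier = nn.Linear(latent_dim, num_classes)
        # Synthesis head: latent code -> reconstructed 32x32 image, which
        # makes the latent code visually inspectable.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)

model = SynthesisAwareClassifier()
batch = torch.rand(4, 3, 32, 32)            # stand-in for real images
logits, reconstructions = model(batch)      # prediction plus visual evidence
print(logits.shape, reconstructions.shape)  # (4, 10) and (4, 3, 32, 32)
```

Training such a model against both a classification loss and a reconstruction loss ties each prediction to an image a reviewer can look at, which is one way to bind the intermediate interpretation to a target.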


For this project we used a combination of OpenCV, scikit-learn, and PyTorch.
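
The exact pipeline is not public, but as a hedged illustration of how such a stack can interlock: OpenCV covers image I/O and classical feature extraction, scikit-learn hosts a transparent baseline classifier, and PyTorch would carry a deep model like the sketch above. The choice of HOG features and logistic regression below is an assumption made for the example, not the project's recipe.

```python
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

# Default HOGDescriptor geometry expects 64x128 windows.
hog = cv2.HOGDescriptor()

def hog_features(image_bgr: np.ndarray) -> np.ndarray:
    """Classical, fully inspectable features extracted with OpenCV."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (64, 128))
    return hog.compute(resized).ravel()

# Random stand-ins for the client's images and binary labels.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (100, 80, 3), dtype=np.uint8) for _ in range(20)]
labels = rng.integers(0, 2, size=20)

# scikit-learn gives a transparent baseline to compare the deep model against.
X = np.stack([hog_features(img) for img in images])
baseline = LogisticRegression(max_iter=1000).fit(X, labels)
print(baseline.score(X, labels))
```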

Result

By the end of the project, the client was satisfied with both the accuracy and the level of explainability our system achieved.

However, we also identified that the proposed system is, from this latter perspective, something of an overkill: under operating conditions its maintenance burden may be higher than that of a pure deep learning, black-box system. Our subsequent projects with the client therefore returned to that domain.
