The CSRI has organised the first workshop on Interpretable and Explainable Machine Vision at the British Machine Vision Conference (BMVC) being held in Cardiff this week.
Recent years have seen significant advances in techniques for image processing and machine vision, driven by breakthroughs in machine learning and artificial intelligence, especially deep neural networks. However, such techniques are widely viewed as creating “black box” systems that are in some sense “inscrutable”, leading to concerns over their reliability, stability, and trustworthiness. Consequently, there has been a surge of interest in approaches aimed at “opening the black boxes”, commonly characterised by the terms interpretability and explainability.
The aim of the workshop is to examine principles and practice for making machine vision techniques - especially ones involving artificial intelligence and machine learning - more explainable and trustworthy for human users.
The workshop will open with a panel of industry experts from Airbus, BAE Systems, and IBM, who will assess theory versus practice in the field. The panel will be followed by a series of technical papers on topics including techniques for interpreting the processing of deep neural networks, visualisation techniques for explanation, and “how to make neural networks lie”.
Visit the website: Workshop: Interpretable & Explainable Machine Vision
Visit the website: British Machine Vision Conference