Trustable Visual Intelligence
The trustworthiness of a cyber-physical system derives from answers to questions such as: can we understand how its algorithms work? Can we assess its safety and reliability? Can its actions be accounted for? Is it tamper-proof and secure? Can we probe its methods and question its decisions? With this rationale in mind, the ‘Trusted Visual Intelligence Lab’ of iHub-Drishti will focus on the design of dependable software solutions for CV, AR, and VR systems across a multitude of problem areas, such as autonomous systems, telemedicine, the biosphere, document analysis, Industry 4.0, and so on.
The lab aims to explore diverse approaches to achieving and integrating fairness, robustness, explainability, and accountability across the entire life-cycle of system design. Potential work trajectories under the lab would include devising: a) mechanisms to quantify vulnerabilities of underlying system architectures; b) processes for bias detection and mitigation across datasets and models; c) strategies to induce different levels of governability, explainability, and interpretability into foundational models; and d) user-interface designs that ease communication with, and functional evaluation of, explainable systems.
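To make work item (b) concrete, here is a minimal sketch of one common bias check a detection pipeline might include: the demographic parity difference, i.e. the gap in positive-outcome rates between two groups. The helper name, group encoding, and toy data are all illustrative assumptions, not the lab's actual methodology.

```python
# Illustrative sketch only: one simple fairness metric that a
# bias-detection pipeline for datasets/models might compute.

def demographic_parity_difference(labels, groups):
    """Absolute gap in positive-outcome rate between group 0 and group 1.

    labels: iterable of 0/1 outcomes (e.g. model predictions)
    groups: iterable of 0/1 group membership (hypothetical protected attribute)
    """
    rates = {}
    for g in (0, 1):
        outcomes = [y for y, grp in zip(labels, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes) if outcomes else 0.0
    return abs(rates[0] - rates[1])

# Toy data: group 0 receives positive outcomes 75% of the time,
# group 1 only 25% of the time, so the disparity is 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # → 0.5
```

A value near 0 suggests parity on this one axis; a large value flags a disparity worth investigating. Real bias audits would combine several such metrics with statistical significance testing and intersectional group definitions.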