Fiddler Labs, SRI and Berkeley experts open up the black box of machine learning at TC Sessions: Robotics+AI
As AI permeates the home, the workplace, and public life, it's increasingly important to understand why and how it makes its decisions. Explainable AI isn't just a matter of hitting a switch, though; experts from UC Berkeley, SRI, and Fiddler Labs will discuss how we should go about it on stage at TC Sessions: Robotics+AI on March 3.
What does explainability really mean? Do we need to start from scratch? How do we avoid exposing proprietary data and methods? Will there be a performance hit? Whose responsibility will it be, and who will ensure it is done properly?
On our panel addressing these questions and more will be experts from both academia and private industry.
Trevor Darrell is a professor in UC Berkeley's computer science department who helps lead many of the university's AI-related labs and projects, especially those concerned with the next generation of smart transportation. His research group focuses on perception and human-AI interaction, and he previously led a computer vision group at MIT.
Source: Tech Crunch