Bio: Aishwarya Agrawal is an Assistant Professor in the Department of Computer Science and Operations Research at the University of Montreal. She is also a Canada CIFAR AI Chair and a core academic member of Mila — Quebec AI Institute, and spends one day a week as a Research Scientist at DeepMind’s Montreal office. Aishwarya completed her PhD at Georgia Tech in August 2019, working with Dhruv Batra and Devi Parikh. Her research interests lie at the intersection of computer vision, deep learning, and natural language processing. She is a recipient of the Canada CIFAR AI Chair Award, the Georgia Tech 2020 Sigma Xi Best Ph.D. Thesis Award, the Georgia Tech 2020 College of Computing Dissertation Award, a 2019 Google Fellowship (declined), a 2019-2020 Facebook Fellowship (declined), and a 2018-2019 NVIDIA Graduate Fellowship.
Keynote talk 1: Vision and Language: Progress and Challenges
In this talk, I will provide a brief overview of the progress made so far in vision and language research, highlighting various vision and language tasks and modelling paradigms. I will then discuss some current challenges in vision and language research, focusing on the problems of language priors and visual grounding, generalization to unseen data distributions, and stringent evaluation metrics in the context of Visual Question Answering (VQA).
Bio: Tegan is an Assistant Professor in the Faculty of Information at the University of Toronto, and an affiliate of the Vector Institute and the Schwartz Reisman Institute for Technology and Society. She is also a managing editor at the Journal of Machine Learning Research (JMLR), the top scholarly journal in machine learning, and co-founder of Climate Change AI (CCAI), an organization that catalyzes impactful work applying machine learning to problems of climate change. Prior to joining the iSchool, Tegan completed her PhD at Mila and Polytechnique Montreal, where she was an NSERC- and IVADO-funded scholar working with Chris Pal. Her recent research has two themes: (1) real-world generalization, learning theory, and practical auditing tools (e.g. unit tests, sandboxes) to empirically evaluate learning behaviour or simulate deployment of an AI system; and (2) deep representation learning and predictive methods in ecological dynamical systems for impact assessment, policy analysis, and risk mitigation, especially for common-good problems.
Keynote talk 2: Practical Directions for Responsible AI Development
Artificial intelligence (AI) systems are increasingly deployed in real-world settings, but we lack a rigorous science to understand or predict their behavior in these settings. Even when we can formalize the problem we’re addressing in quite clear statistical terms (e.g. supervised learning on a fixed dataset), there is much we still do not understand about how and why deep nets are able to generalize as well as they do, why they fail when they do, or how they will perform on out-of-distribution data. To address these questions, I study AI systems and ‘what goes into’ them – not only data, but the broader learning environment, including task design/specification, loss function, and regularization, as well as the broader societal context of deployment, including privacy considerations, trends and incentives, norms, and human biases. Concretely, this involves techniques such as designing unit-test environments to empirically evaluate learning behaviour, or sandboxing to simulate deployment of an AI system. This talk will give an overview of my work, which seeks to contribute understanding and techniques to the growing science of responsible AI development, while usefully applying AI to high-impact ecological problems including climate change, epidemiology, and ecological impact assessment.
Pablo Samuel Castro
Bio: Pablo Samuel was born and raised in Quito, Ecuador, and moved to Montreal after high school to study at McGill, where he eventually obtained his master’s and PhD, focusing on reinforcement learning. He is currently a Staff Research Software Developer at Google Research (Brain team) in Montreal, focusing on fundamental reinforcement learning research as well as machine learning and creativity, and is a regular advocate for increasing LatinX representation in the research community. He is also an active musician.
Keynote talk 3: Deep Reinforcement Learning – Challenges and Opportunities
Reinforcement learning (RL) began as a (mostly) theoretical subfield of machine learning, but has achieved a number of success stories over the last six years thanks to the use of deep networks (i.e. deep RL). Unfortunately, it has proved difficult to develop theoretical results for deep RL without strong assumptions, resulting in purely empirical evaluation of the performance of new deep RL algorithms; more often than not, these empirical evaluations are used to demonstrate that proposed methods are “state-of-the-art” relative to existing baselines. Given this lack of theoretical analysis, it behooves us, as a community, to gain a better empirical understanding of RL when combined with deep networks. My research over the past year has focused largely on this question, and I will present a series of recent findings, some surprising difficulties discovered along the way, and opportunities for future research.