It is not uncommon these days for a real-world machine learning application to solve multiple tasks at once. These tasks' objectives often correlate, conflict, or compete with each other, making it impossible to find a single globally optimal solution. This has spurred strong interest in discovering sets of solutions with varying trade-offs across objectives, known as the Pareto front. However, most existing work explores only finite, discrete, and sparse approximations of Pareto fronts. In this talk, I will present two of our recent papers on efficiently discovering locally continuous Pareto fronts in machine learning applications. We first show that by leveraging Krylov subspace methods and Pareto optimality conditions, we can efficiently find continuous sets of Pareto optimal solutions in large-scale multi-task learning applications. Next, we extend this idea to deep reinforcement learning and show that a prediction-guided network can reconstruct a family of continuous Pareto optimal controllers for robots. I will conclude the talk by proposing a few future research directions that explore continuous Pareto fronts in other deep-learning research frontiers, including Neural Architecture Search (NAS), Meta-Learning, and Self-Supervised Learning (SSL).
 Efficient Continuous Pareto Exploration in Multi-Task Learning, Pingchuan Ma*, Tao Du*, and Wojciech Matusik, ICML 2020.
 Prediction-guided Multi-Objective Reinforcement Learning for Continuous Robot Control, Jie Xu, Yunsheng Tian, Pingchuan Ma, Daniela Rus, Shinjiro Sueda, and Wojciech Matusik, ICML 2020.
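The two ingredients named for the first paper, Pareto optimality conditions and matrix-free Krylov solves, can be illustrated on a toy two-objective problem. This is a minimal sketch under assumed toy quadratic objectives, not the paper's implementation: a Pareto-stationary point satisfies a zero alpha-weighted gradient, and movement along the front is computed by solving a linear system with the weighted Hessian via MINRES, using only Hessian-vector products.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

# Toy quadratic objectives (illustrative assumption): f1(x) = ||x - a||^2,
# f2(x) = ||x - b||^2. Their Pareto set is the segment between a and b.
a = np.array([0.0, 0.0])
b = np.array([1.0, 2.0])

def grad_f1(x):
    return 2.0 * (x - a)

def grad_f2(x):
    return 2.0 * (x - b)

def hvp(v):
    # Hessian-vector product of the alpha-weighted objective.
    # Both Hessians are 2*I here, so H v = 2 v; for a neural network this
    # would be computed matrix-free (e.g. Pearlmutter's trick), never by
    # forming the Hessian explicitly.
    return 2.0 * v

alpha = 0.3
x = alpha * a + (1 - alpha) * b  # a point on the Pareto set

# Pareto optimality (stationarity) condition: some convex combination of
# the per-task gradients vanishes.
g = alpha * grad_f1(x) + (1 - alpha) * grad_f2(x)
print(np.allclose(g, 0.0))  # True at a Pareto-stationary point

# Krylov step: solve H v = d matrix-free to move continuously along the
# front, where d is the gradient change induced by perturbing alpha.
d = grad_f2(x) - grad_f1(x)
H = LinearOperator((2, 2), matvec=hvp)
v, info = minres(H, d)  # info == 0 on convergence
```

The point is that only Hessian-vector products enter the solve, which is what makes Krylov methods viable at the scale of deep multi-task models.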
Pingchuan Ma is a second-year Ph.D. student in computer science at MIT, advised by Professor Wojciech Matusik. He conducts research at the intersection of computer graphics and machine learning. His current interests include multi-task learning, physical simulation, and reinforcement learning.