Optimization problems arise naturally in many applications of computer science and engineering. Unfortunately, many important optimization problems are NP-hard to solve. To get around the NP-hardness barrier, one approach is to design efficient algorithms that return a suboptimal solution close to the optimum. Formally, an algorithm is called an α-approximation algorithm if it is guaranteed to output a solution whose value is within an α factor of the optimum.
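As a concrete illustration (not taken from the talk itself), the textbook 2-approximation for minimum vertex cover shows what such a guarantee looks like in practice: repeatedly pick an uncovered edge and add both of its endpoints to the cover. The chosen edges form a matching, and any cover must contain at least one endpoint of each matched edge, so the output is at most twice the optimum.

```python
def vertex_cover_2approx(edges):
    """Greedy matching-based 2-approximation for minimum vertex cover.

    For each edge with neither endpoint covered yet, add BOTH endpoints.
    The selected edges form a matching, so the returned cover has size
    at most twice that of an optimal cover.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # take both endpoints of an uncovered edge
    return cover

# Example: the path 0-1-2-3, whose optimal cover {1, 2} has size 2.
edges = [(0, 1), (1, 2), (2, 3)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)  # every edge is covered
print(sorted(cover))  # prints [0, 1, 2, 3] — size 4, within 2x the optimum of 2
```

The guarantee here is worst-case: no matter the input graph, the cover is provably within a factor of 2, which is exactly the kind of bound an α-approximation algorithm promises.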
Given an NP-hard optimization problem, it is natural to ask the following question: “What is the best approximation ratio that any polynomial-time algorithm could achieve?”
In this talk, we will present general techniques for answering the above question for a variety of optimization problems arising from different areas, such as combinatorial optimization, machine learning, and economics.
Yi Wu is a Postdoctoral Researcher at IBM Almaden Research. He received his Ph.D. from Carnegie Mellon University and his undergraduate degree from Tsinghua University. His dissertation, entitled "The Approximability of Learning and Constraint Satisfaction Problems", won the CMU SCS Distinguished Dissertation Award in 2010. His main research interests include approximation algorithms, optimization, learning, and complexity theory.