|2011 Spring, Hot Topics in Computer Science
(13:30-15:05, Friday, Rm 6A112, 6th Teaching Building)
|1||Feb. 25||Iddo Tzameret
|Satisfiability and Proof Complexity|
|Satisfiability is among the central problems of theoretical computer science. In this lecture we shall consider a basic and very useful algorithm for solving SAT for formulas in conjunctive normal form (CNF), namely the DPLL algorithm. We then prove that there are families of CNF formulas for which DPLL will never terminate in less than exponential time. This will be done by a connection between the run time of DPLL and questions concerning the length of propositional proofs. Specifically, we will prove a famous and important theorem by A. Haken, stating that the pigeonhole principle does not have short resolution proofs.
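To make the DPLL algorithm concrete, here is a minimal sketch in Python (not the lecture's own code; the clause encoding and the naive branching rule are illustrative simplifications):

```python
# Minimal DPLL sketch. A CNF formula is a list of clauses; each clause is a
# list of non-zero integers, where k means variable k and -k its negation.

def dpll(clauses):
    # Unit propagation: repeatedly assign variables forced by unit clauses.
    while True:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        clauses = simplify(clauses, units[0])
        if clauses is None:              # derived the empty clause: conflict
            return False
    if not clauses:                      # every clause satisfied
        return True
    lit = clauses[0][0]                  # branch on the first remaining literal
    for choice in (lit, -lit):
        reduced = simplify(clauses, choice)
        if reduced is not None and dpll(reduced):
            return True
    return False

def simplify(clauses, lit):
    """Set `lit` true: drop satisfied clauses, delete falsified literals."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        reduced = [x for x in c if x != -lit]
        if not reduced:                  # empty clause: this branch fails
            return None
        out.append(reduced)
    return out
```

For example, `dpll([[1, 2], [-1], [-2, 3]])` returns True, while `dpll([[1], [-1]])` returns False. The transcript of a failed DPLL run is essentially a resolution refutation, which is why resolution lower bounds such as Haken's translate into exponential lower bounds on DPLL running time.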
Iddo Tzameret is a postdoctoral researcher at the Institute for Interdisciplinary Information Sciences, Tsinghua University. He received his PhD degree from Tel Aviv University in 2008. He has been awarded a postdoctoral fellowship by the Eduard Cech Center for Algebra and Geometry (Czech Republic). His area of research is the theory of computing and complexity theory, with an emphasis on satisfiability and proof complexity.
|2||Mar. 4||Andrew Wan
|Recent results in unconditional derandomization|
|The last few decades have seen an abundance of efficient randomized algorithms for a variety of important problems. Randomness is a resource, and a natural and important question is whether we can remove or decrease these algorithms' reliance on randomness. Since 1997, it has been known that the entire class BPP of efficient randomized algorithms can be simulated by efficient deterministic
algorithms, given the right computational assumptions (this follows a series of works starting with [Yao82] that use progressively weaker assumptions).
At the same time, derandomization results requiring no computational assumptions were discovered for more limited classes of algorithms, such as AC0 circuits and branching programs [Nisan1990,NW1989]. In recent years, progress on the more general question of derandomizing BPP has been slow, but there have been many exciting results for derandomizing restricted classes of algorithms, such as AC0 circuits [Bazzi07,Razborov08,Braverman09,De+10], branching
programs [Braverman+10, Brody+10], halfspaces, and polynomial threshold functions
[Meka+10,Diakonikolas+09,10]. I will survey some of these recent results, the techniques they employ, and applications to problems such as approximate counting.
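As a small taste of unconditional derandomization (a classical warm-up, not one of the recent results surveyed here), the method of conditional expectations removes the randomness from the random-cut algorithm for MAX-CUT; the sketch below is my own illustration:

```python
# The random-cut algorithm for MAX-CUT puts each vertex on a uniformly
# random side, cutting half of the edges in expectation. The method of
# conditional expectations derandomizes it: fix the vertices one at a time,
# always choosing the side that keeps the conditional expectation of the
# cut size at least as large, which deterministically cuts >= |E|/2 edges.

def greedy_cut(n, edges):
    side = {}
    for v in range(n):
        # Only edges to already-placed neighbours affect the choice; edges
        # to unplaced neighbours contribute 1/2 in expectation either way.
        cut_if_0 = sum(1 for a, b in edges
                       if (a == v and side.get(b) == 1) or (b == v and side.get(a) == 1))
        cut_if_1 = sum(1 for a, b in edges
                       if (a == v and side.get(b) == 0) or (b == v and side.get(a) == 0))
        side[v] = 0 if cut_if_0 >= cut_if_1 else 1
    return side, sum(1 for a, b in edges if side[a] != side[b])
```

On a triangle, `greedy_cut(3, [(0, 1), (1, 2), (0, 2)])` cuts 2 of the 3 edges. The random choices have been replaced by a deterministic scan, the simplest instance of the trade-off the survey is about.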
Andrew Wan is an Assistant Professor at the Institute for Theoretical Computer Science, IIIS, Tsinghua University. He received his Ph.D. in Computer Science in 2010 from Columbia University, where he also received his M.Sc. in Computer Science and Bachelor of Arts in Philosophy and Economics. He currently does research in Computational Complexity, Computational Learning Theory, the fundamentals of Cryptography, and the analysis of Boolean functions.
|3||Mar. 11||Hua Qiang-Sheng Charlie
|Wireless Algorithms in the Physical Model|
|In this lecture, I will cover the following four topics:
(1) wireless basics, especially the two popular models used in wireless communications (the protocol model and the physical model);
(2) how power control plays a significant role in improving the performance of wireless algorithms;
(3) state-of-the-art wireless problems that are intensively studied in both the theory and networking communities, together with some novel algorithms developed for them;
(4) open problems in wireless algorithms under the physical model, with discussion.
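As background for the physical model mentioned above, the standard SINR feasibility condition can be written down directly. This sketch is my own; the path-loss exponent `alpha`, threshold `beta`, and noise floor are illustrative values, not ones from the lecture:

```python
import math

def sinr_feasible(links, alpha=3.0, beta=1.5, noise=1e-9):
    """links: list of (sender_xy, receiver_xy, power).
    In the physical (SINR) model a set of simultaneous transmissions is
    feasible iff, at every receiver, the received signal power P/d^alpha
    divided by (noise + total interference) meets the threshold beta."""
    def gain(p, q, power):
        return power / math.dist(p, q) ** alpha

    for i, (s_i, r_i, p_i) in enumerate(links):
        signal = gain(s_i, r_i, p_i)
        interference = sum(gain(s_j, r_i, p_j)
                           for j, (s_j, _, p_j) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True
```

A single isolated link passes the check, while two nearby parallel links at equal power fail it; choosing powers and schedules so that such checks pass is exactly where power control enters.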
Hua Qiang-Sheng Charlie is now an assistant professor at the Institute for Interdisciplinary Information Sciences, Tsinghua University. He received his PhD degree in computer science from The University of Hong Kong in July 2009. His research interests are in wireless networking and algorithms.
|4||Mar. 18||John Steinberger
|Information-theoretic methods in randomized algorithms|
|We will look at recent randomized algorithms for k-SAT, for certain classes of k-SAT formulas that are guaranteed to be satisfiable (this doesn't mean *finding* the satisfying assignment is easy). The proofs that these algorithms terminate in linear expected time use novel arguments involving the non-compressibility of random strings; in particular, it is shown that a long running time of the algorithm implies that the algorithm's random tape can be compressed to something much shorter (and a random string can only be compressed with low probability, hence the probability that the algorithm runs
for a long time is low). These are results of Robin Moser, with improvements by Moser and Tardos.
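The algorithm whose running time this compression argument bounds is strikingly simple. Here is an illustrative sketch of the Moser-Tardos resampling procedure for k-SAT (the literal encoding and the example formula are my own, and the linear expected time holds only under the Lovász Local Lemma-style condition that each clause shares variables with few others):

```python
import random

def moser_tardos_ksat(n, clauses, rng=random.Random(0)):
    """Start from a uniformly random assignment; while some clause is
    violated, resample just that clause's variables and repeat. Literals
    are non-zero integers: k means variable k, -k its negation."""
    assign = [rng.random() < 0.5 for _ in range(n)]

    def violated(c):
        # A clause is violated when every one of its literals is false.
        return all(assign[abs(l) - 1] != (l > 0) for l in c)

    while True:
        bad = next((c for c in clauses if violated(c)), None)
        if bad is None:
            return assign
        for l in bad:                    # resample only the violated clause
            assign[abs(l) - 1] = rng.random() < 0.5
```

The compression argument shows that each resampling step "uses up" random bits in a way that would let us compress the random tape if the loop ran long, so long runs are exponentially unlikely.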
|5||Mar. 25||Joshua Brody
|Property Testing Lower Bounds via Communication Complexity|
|In property testing, you're given limited access to a large object X, and you want to design an algorithm that determines if X has some property P, or if X is _far_ from having property P. The goal is to distinguish these two cases _without_ having to process all of X.
Property testing is an important class of algorithms for dealing with massive data sets, and determining how much time is needed to test a property is an ongoing and exciting area of computer science. This talk will give a new method for proving _lower bounds_ on the complexity of several property testing problems, via reductions from communication complexity.
The first half of the lecture will be a crash course in the exciting areas of property testing and communication complexity. In the second half of the lecture, I'll present the new connection between the two subfields and several examples of new testing lower bounds we get from this connection.
The research I present is joint work with Kevin Matulef and Eric Blais.
Joshua Brody is a postdoc at Tsinghua University IIIS. He received his PhD in 2010 from Dartmouth College, where he worked on communication complexity and lower bounds in theoretical computer science, advised by Amit Chakrabarti.
|6||Apr. 1||Le Luo
|Abstract: Quantum networks based on trapped atomic ions and scattered photons provide a promising way to build a large-scale quantum information processor. Such systems promise to store and process information in a way that could eclipse the performance of conventional computers. We have recently demonstrated the generation of entanglement and the operation of gates between two distant trapped-ion qubits. We have also realized several quantum protocols, including the first quantum random number generator, using remote entangled ions. In particular, enhancing the collection of spontaneously emitted photons from trapped ions is likely to help scale up atom-photon quantum networks in the next several years. By integrating a micro-fabricated ion trap with a cavity QED system in the intermediate coupling regime, we recently observed an enhancement of the spontaneous emission from a single trapped ytterbium ion into a cavity mode by a factor of hundreds compared with the free-space emission into the same solid angle. Such ion-cavity systems provide a platform for realizing a large-scale atom-photon quantum network that could possibly close both the locality and detection loopholes in a Bell test experiment. Furthermore, trapped charged particles inside an optical cavity open the door to potential applications in other fields, such as trapping mesoscopic materials and biomolecules for optical study.
Le Luo is a Research Scientist and recipient of a postdoctoral fellowship award at the Joint Quantum Institute (USA). He received his PhD and MS in Physics from Duke University, an MS in Optics from Peking University, and a BS in Physics from Sun Yat-sen University. For more than a decade he has been active in experimental atomic, molecular, and optical physics, working in frontier areas including cold-atom physics, trapped-ion quantum computing, atom-photon quantum networks, cavity quantum electrodynamics, and ultrafast optics.
|7||Apr. 8||Yvo Desmedt
|The use of polynomials in cryptography|
|Polynomials are a topic from high school mathematics but have amazingly interesting applications in cryptography and combinatorics. The following examples from cryptography are discussed: secret sharing, broadcast authentication (with unconditional security), efficient authentication (with unconditional security), and stream authentication. Mathematical details are provided.
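The first example, Shamir's polynomial secret sharing, fits in a few lines: a random polynomial of degree k-1 hides the secret in its constant term, any k shares recover it by Lagrange interpolation, and any k-1 shares reveal nothing. This is an illustrative sketch (the prime and parameters are arbitrary choices, not from the lecture):

```python
import random

P = 2 ** 31 - 1  # a public prime; all arithmetic is done modulo P

def share(secret, k, n, rng=random.Random(0)):
    """Split `secret` into n shares so that any k of them reconstruct it:
    pick a random polynomial f of degree k-1 with f(0) = secret and hand
    out the points (1, f(1)), ..., (n, f(n))."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):       # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers f(0), the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

For instance, `reconstruct(share(12345, 3, 5)[:3])` returns 12345, and any other 3 of the 5 shares work equally well; `pow(den, P - 2, P)` is the modular inverse via Fermat's little theorem.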
Yvo Desmedt received his Ph.D. (Summa cum Laude) from the University of Leuven, Belgium (1984). He is presently the Chair of Information Communication Technology at University College London. He has held visiting appointments at Macquarie University (Australia), Technion (Israel), Tokyo Institute of Technology (Japan), Universite de Montreal (Canada), University of Karlsruhe (Germany), etc. He is an (associate) editor of Information Processing Letters, The Journal of Computer Security, Computers & Security, and Advanced Mathematics of Communications. He is also the Editor-in-Chief of IET Information Security. He was program chair of Crypto 1994, ICITS 2007, PKC 2003, and the ACM workshop on Scientific Aspects of Cyber Terrorism 2002, and co-program chair of CANS 2005. He has been an invited speaker at conferences and workshops on five continents. He has authored over 200 refereed papers. He has been an IACR Fellow since 2010.
|8||Apr. 15||Christos Papadimitriou
|The fixpoint of a Boolean function is HOT!|
|Abstract: TCS research can illuminate the mysteries not only of computation, but of all of science. Computation is a powerful lens for understanding nature in the small (quantum mechanics, statistical
physics, cells, neurons) but also in the large (markets, Internet, web, evolution, brain). I will speak about my work in two of these areas, computational aspects of evolution and economics. In the latter, I will
briefly mention three HOT problems (meaning: challenging, deep, and important): PTAS for Nash equilibrium, CLS = P, and the complexity of Bayesian auctions. In the former, I shall introduce some concepts on which I have been working here in Beijing this week, motivated by a classical experiment by Waddington, and which seem promising both to shed light on evolution and to become productive subjects of study in TCS.
Christos H. Papadimitriou is C. Lester Hogan Professor of Computer Science at UC Berkeley. Before joining Berkeley in 1996 he taught at Harvard, MIT, Athens Polytechnic, Stanford, and UCSD. He has written five textbooks and many articles on algorithms and complexity, and their applications to optimization, databases, AI, economics, evolution, and the Internet. He holds a PhD from Princeton, and honorary doctorates from ETH (Zurich) and the Universities of Macedonia (Thessaloniki), Athens, Cyprus, and Patras. He is a member of the National Academy of Sciences, the American Academy of Arts and Sciences, and the National Academy of Engineering, and a fellow of the ACM. His second novel "Logicomix" has been translated into over 20 languages.
|10||Apr. 29||Yi Ma
|For Low-Rank Structures in Images and Data|
|Abstract: In this talk, we will introduce two fundamental computational tools, namely TILT and RASL, for extracting rich low-rank structures in images and videos, respectively. Both tools utilize the same transformed Robust PCA model for the visual data, D◦τ = A + E, and use practically the same algorithm for extracting the low-rank structures A from the visual data D, despite image-domain transformations τ and corruptions E. We will show how these two seemingly simple tools can help unleash tremendous information in images and videos that we used to struggle to get. We believe these new tools will bring disruptive changes to many challenging tasks in computer vision and image processing, including feature extraction, image correspondence and alignment, 3D reconstruction, and object recognition. This is joint work with John Wright of MSRA, Emmanuel Candes of Stanford, my students Zhengdong Zhang, Xiao Liang, and Yigang Peng of Tsinghua, and Arvind Ganesh of UIUC.
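RASL and TILT themselves additionally estimate the domain transformation τ, but the core decomposition they share, splitting a matrix D into a low-rank part A plus sparse errors E, can be sketched with a simplified augmented-Lagrangian iteration. The code below is an illustrative sketch (fixed penalty, fixed iteration count), not the authors' implementation:

```python
import numpy as np

def robust_pca(D, lam=None, iters=200, mu=1.0):
    """Decompose D ~ A + E by minimizing ||A||_* + lam * ||E||_1 subject to
    A + E = D (principal component pursuit), via a simplified augmented
    Lagrange multiplier (ADMM-style) iteration."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    A = np.zeros_like(D); E = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        # Singular-value thresholding step for the low-rank part A.
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = U @ np.diag(np.maximum(s - 1.0 / mu, 0)) @ Vt
        # Entrywise soft thresholding step for the sparse part E.
        R = D - A + Y / mu
        E = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        # Dual update enforcing the constraint D = A + E.
        Y = Y + mu * (D - A - E)
    return A, E
```

On a small matrix built as (rank-one) + (a few large sparse spikes), the iteration separates the two parts; production implementations add an increasing penalty schedule and a convergence test rather than a fixed iteration count.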
Yi Ma has been the research manager of the Visual Computing group at Microsoft Research Asia in Beijing since January 2009. He is also an associate professor at the Electrical & Computer Engineering Department of the University of Illinois at Urbana-Champaign. His main research interests are in computer vision, high-dimensional data analysis, and systems theory. He is the first author of the popular vision textbook "An Invitation to 3-D Vision," published by Springer in 2003. Yi Ma received two Bachelor's degrees, in Automation and Applied Mathematics, from Tsinghua University (Beijing, China) in 1995, a Master of Science degree in EECS in 1997, a Master of Arts degree in Mathematics in 2000, and a PhD degree in EECS in 2000, all from the University of California at Berkeley. Yi Ma received the David Marr Best Paper Prize at the International Conference on Computer Vision 1999, the Longuet-Higgins Best Paper Prize at the European Conference on Computer Vision 2004, and the Sang Uk Lee Best Student Paper Award with his students at the Asian Conference on Computer Vision in 2009. He also received the CAREER Award from the National Science Foundation in 2004 and the Young Investigator Award from the Office of Naval Research in 2005. He is an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) and the International Journal of Computer Vision (IJCV). He has served as the chief guest editor for special issues of the Proceedings of the IEEE and the IEEE Signal Processing Magazine. He will also serve as Program Chair for ICCV 2013 in Sydney, Australia. He is a senior member of IEEE and a member of ACM, SIAM, and ASEE.
|12||May 13||Wei Chen
|Computational Aspects of Social and Information Networks|
|In this lecture, I will introduce recent hot trends in the study of social and information networks. My talk contains two parts. In the first part, I will introduce one research project in our group --- scalable influence maximization --- to demonstrate our latest research efforts on social and information networks. In the second part, I will introduce several phenomena discovered in recent years across a wide range of networks, such as the small-world phenomenon and power-law degree distributions. Many aspects of social and information networks are being studied, and there are many research opportunities ahead.
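To make "influence maximization" concrete, here is the textbook baseline that scalable methods improve upon: the greedy algorithm of Kempe, Kleinberg, and Tardos with Monte Carlo spread estimation under the independent cascade model. This sketch and its parameters are illustrative, not the speaker's algorithms; scalability work replaces the expensive simulation step:

```python
import random

def spread(graph, seeds, p=0.1, trials=200, rng=random.Random(0)):
    """Monte Carlo estimate of expected influence spread under the
    independent cascade model: each newly activated node tries once to
    activate each out-neighbour, succeeding with probability p."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v); frontier.append(v)
        total += len(active)
    return total / trials

def greedy_im(graph, k, **kw):
    """Greedy seed selection: repeatedly add the node with the largest
    marginal spread gain. Because spread is monotone and submodular, this
    achieves a (1 - 1/e) approximation of the optimal seed set."""
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        best = max(nodes - set(seeds),
                   key=lambda v: spread(graph, seeds + [v], **kw))
        seeds.append(best)
    return seeds
```

Each greedy step runs many cascade simulations per candidate node, which is exactly the cost that scalable influence maximization attacks.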
Wei Chen is a Lead Researcher in the Theory group of Microsoft Research Asia, and an Adjunct Professor of Tsinghua University. His main research interests include social and information networks, algorithmic game theory and Internet economics, distributed computing, and fault tolerance.
|13||May 20||Hang Li
|Learning to Rank|
|Abstract: Learning to rank refers to machine learning techniques for training models for ranking tasks. It is useful for many applications in Information Retrieval, Natural Language Processing, and Data Mining. Intensive studies have been conducted on the problem and significant progress has been made. In this lecture, I will give an introduction to learning to rank, and explain its fundamental problems, existing approaches, and directions for future work. Several learning-to-rank methods using SVM techniques will be described in detail.
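One family of SVM-based methods, pairwise Ranking SVM, reduces ranking to classifying document-pair differences. The following from-scratch subgradient sketch is illustrative (features, labels, and step sizes are my own toy choices, not from the lecture):

```python
import numpy as np

def rank_svm(X, y, lr=0.1, C=1.0, epochs=100):
    """X: feature vectors of one query's documents; y: relevance labels.
    Minimizes (1/2)||w||^2 + C * sum over pairs (i, j) with y_i > y_j of
    hinge(1 - w.(x_i - x_j)), by batch subgradient descent, so that more
    relevant documents score higher than less relevant ones."""
    n, d = X.shape
    pairs = [(i, j) for i in range(n) for j in range(n) if y[i] > y[j]]
    w = np.zeros(d)
    for _ in range(epochs):
        g = w.copy()                     # gradient of the regularizer
        for i, j in pairs:
            diff = X[i] - X[j]
            if 1 - w @ diff > 0:         # margin violated: hinge subgradient
                g -= C * diff
        w -= lr / len(pairs) * g
    return w

def rank(w, X):
    """Return document indices ordered from best score to worst."""
    return np.argsort(-(X @ w))
```

On documents whose single feature grows with relevance, the learned `w` is positive and `rank` orders them from most to least relevant.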
Hang Li is a senior researcher and research manager at Microsoft Research Asia. He is also an adjunct professor at Peking University, Nanjing University, Xian Jiaotong University, and Nankai University. His research areas include information retrieval, natural language processing, statistical machine learning, and data mining. He graduated from Kyoto University and earned his PhD from the University of Tokyo. Hang has about 100 publications in international journals and conferences. His recent academic activities include serving as program committee co-chair of WSDM'11 and EMNLP'10. Hang has been working on the development of several products, including NEC TopicScope, Microsoft SQL Server 2005, Microsoft Office 2007 and Office 2010, Microsoft Live Search 2008, Microsoft Bing 2009, and Bing 2010.
|14||May 27||Xin Tong
|Recent Advances in Graphics and Visualization|
|In this talk, I will introduce recent advances in the fields of computer graphics and visualization. In particular, I will cover the state-of-the-art techniques developed in each subarea, such as geometric modeling, realistic rendering, animation, and information visualization. These techniques will give the audience a rough overview of this exciting field.|
|15||Jun. 3||Shimin Hu
|From Graphics to Internet Visual Media|
|With the rapid progress of Internet technology, massive visual media data has been emerging on the Internet, which brings a great opportunity for a new evolution of information technology as well as for industry development. Given this massive visual media data, finding its inherent patterns and achieving systematic management, efficient processing, and effective utilization is a great challenge. Visual media itself has the properties of large file size, no inherent structure, high dimension, and rich semantics. In this talk, I will introduce some research of the Graphics Group at Tsinghua on intelligent processing of Internet visual media, including visual media analysis, search, synthesis, and composition.
Shi-Min Hu received the PhD degree from Zhejiang University in 1996. He is currently a professor in the Department of Computer Science and Technology, Tsinghua University, Beijing. His research interests include digital geometry processing, video processing, rendering, computer animation, and computer-aided geometric design. He is on the editorial boards of Computer-Aided Design, Computers & Graphics, and The Visual Computer. He is a member of the IEEE, ACM, and CCF.
|16||Jun. 10||Liwei Wang
|Machine Learning Theory: A Brief Overview|
|Abstract: Machine learning theory studies what is learnable by a machine. In this talk I will first cover the classical results in machine learning, including PAC learning, agnostic learning, VC theory, and empirical processes. These theories provide necessary and sufficient conditions for a problem to be learnable and characterize the best possible learning rate (in the minimax sense). Very interestingly, the learning algorithms that are most successful in applications---boosting and the Support Vector Machine (SVM)---are both inspired by these theories. Then I will briefly discuss two lines of recent advances in machine learning. First I will introduce learning theories for variants of the traditional supervised learning paradigm. These include semi-supervised learning, where the learner obtains partial information from the teacher, and active learning, where the learner is allowed to interact with the teacher. Next I will address learning with convex approximations. From an algorithmic point of view, learning requires solving NP-hard non-convex optimization problems. In practice, one instead optimizes a convex surrogate. I will show that, surprisingly, under mild conditions solving the convex surrogate can lead to the true learning result.
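The convex-surrogate point can be seen in a few lines: minimizing the convex hinge loss by subgradient descent also drives the non-convex 0-1 loss to zero on linearly separable data. The toy data and step sizes below are my own illustration:

```python
import numpy as np

def losses(w, X, y):
    """Compare the non-convex 0-1 loss with its convex hinge surrogate for
    a linear classifier sign(w.x), with labels y in {-1, +1}."""
    margins = y * (X @ w)
    zero_one = np.mean(margins <= 0)                  # fraction misclassified
    hinge = np.mean(np.maximum(0, 1 - margins))       # convex upper bound
    return zero_one, hinge

def minimize_surrogate(X, y, lr=0.1, epochs=200):
    """Subgradient descent on the hinge surrogate only. Because the hinge
    upper-bounds the 0-1 loss, driving the surrogate to zero on separable
    data drives the 0-1 loss to zero as well."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        viol = y * (X @ w) < 1                        # margin-violating points
        if viol.any():
            w += lr * (X[viol] * y[viol][:, None]).mean(axis=0)
    return w
```

Directly minimizing the 0-1 loss is NP-hard in general, yet the surrogate problem is convex and easy; the consistency results mentioned in the abstract explain when this substitution is safe.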
Bio: Liwei Wang received his B.S. and M.S. from the Department of Electronic Engineering, Tsinghua University, in 1999 and 2002 respectively, and his PhD from the School of Mathematics, Peking University, in 2005. He is currently an associate professor in the Department of Machine Intelligence, Peking University. His main research interest is machine learning theory, and he has published 30 papers in top journals and conferences. He was named among AI's 10 to Watch by IEEE Intelligent Systems magazine in 2010.