
Hot Topics in Computer Science 2012

February 01, 2012

2012 Spring, Hot Topics in Computer Science
(13:30-15:05, Friday, Rm 112/102, Tsinghua Xuetang)
Week No. Date Lecturer Title
1 Feb. 24 (6A112)  Periklis Papakonstantinou  Nisan's pseudorandom generator
    ppapakons@gmail.com

The main application of pseudorandom generators in Complexity Theory is in "universal derandomization" (i.e. making every efficient randomized algorithm deterministic by simply replacing its random bits with pseudorandom ones).

20 years ago Noam Nisan constructed *unconditionally* (i.e. not relying on any unproved complexity theory assumptions) a pseudorandom generator that fools space-bounded computation. Today this is known as "Nisan's PRG" (PRG=PseudoRandom Generator).

In particular, Nisan's result implies that we can use (log n)^2 random bits to construct an (almost exponentially longer!) string of length n, *such that* this string looks random to every algorithm with working memory of size O(log n). As a consequence (after appropriately formalizing what "looks random" means), we can use this string to replace a truly random string of length n.

How can we use Nisan's PRG to derandomize O(log n)-space bounded algorithms? Just run Nisan's generator on each of the 2^O((log n)^2) possible "random seeds", and simulate your favorite log-space algorithm using Nisan's output instead of truly random coins. At the end, decide by majority (i.e. say YES if and only if most of the runs say YES). Unfortunately, this does not derandomize the computation in small time or logarithmic space: the time is O(2^((log n)^2)) = O(n^(log n)) (which is super-polynomial) and the space is (log n)^2. For the adventurous student: try to construct a better generator that stretches, say, log n random bits to n. This is a famous open problem (but one that doesn't seem so difficult).
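To make the recipe above concrete, here is a schematic Python sketch of the enumerate-all-seeds-and-take-a-majority-vote template. The functions `generator` and `randomized_alg` are placeholders (Nisan's generator itself is not implemented here), so this illustrates only the derandomization pattern, not the PRG construction.

```python
from itertools import product

# Schematic sketch: `generator` stands in for Nisan's PRG and `randomized_alg`
# for a randomized decision algorithm (1 = YES, 0 = NO); neither is implemented here.

def derandomize(randomized_alg, generator, x, seed_len):
    yes = 0
    for seed in product([0, 1], repeat=seed_len):       # all 2^seed_len seeds
        pseudorandom_bits = generator(seed)             # stretch the seed
        yes += randomized_alg(x, pseudorandom_bits)     # run with pseudorandom coins
    return yes * 2 > 2 ** seed_len                      # YES iff a strict majority says YES

# Toy usage with stand-ins: a 2-bit "seed" copied to 4 "pseudorandom" bits,
# and an "algorithm" that answers YES when either of the first two bits is 1.
toy_gen = lambda seed: list(seed) * 2
toy_alg = lambda x, bits: int(bits[0] or bits[1])
print(derandomize(toy_alg, toy_gen, x=None, seed_len=2))   # True

# With Nisan's PRG the seed length is O((log n)^2), so the loop runs
# 2^O((log n)^2) = n^O(log n) times: deterministic, but super-polynomial, as noted above.
```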

Since it appeared, Nisan's generator and its variants have found many new and diverse applications: removing randomness in streaming computation, reducing the randomness in distributed computation, and so on. In this lecture we probably won't have the chance to see the applications, but I strongly recommend checking out the literature!

Periklis Papakonstantinou is an Assistant Professor at the Institute for Interdisciplinary Information Sciences, Tsinghua University. Right before that he obtained a PhD in Computer Science and an MSc in Mathematics, and before that an MSc in Computer Science, all from the University of Toronto. His undergraduate studies were in Computer Engineering and Science, University of Patras, and he is a licensed Electronics Engineer. His research interests are in Computational Complexity and Cryptography at large. He has published in various areas of Theoretical Computer Science and related disciplines, and he holds several academic distinctions.
 
2 Mar. 2  Andrej Bogdanov  (Fully) homomorphic encryption
     andrejb@cse.cuhk.edu.hk
Fully homomorphic encryption was a technology envisioned in the late 1970s. It allows for arbitrary secure computation on encrypted data. Alice wants Bob to do a computation for her, but she doesn't want to tell him any of her data. Using homomorphic encryption, Bob can do the computation without having any idea what he is computing. Yet the answer he obtains makes perfect sense to Alice.

Until very recently nobody knew how to go about designing fully homomorphic encryption schemes. There are now several proposals, and even an implementation. I will tell you about some ideas that make them work and what you can do to improve them.
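As a toy illustration of the homomorphic idea (not one of the schemes from the talk, and insecure as written): "textbook" RSA is multiplicatively homomorphic, so a server can multiply encrypted numbers without ever seeing them; fully homomorphic encryption extends this to arbitrary computation. The tiny primes below are for readability only.

```python
# Toy, insecure "textbook" RSA to show the multiplicative homomorphism.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent (Python 3.8+ modular inverse)

def enc(m): return pow(m, e, n)          # Alice encrypts
def dec(c): return pow(c, d, n)          # Alice decrypts

c1, c2 = enc(6), enc(7)
c_product = (c1 * c2) % n                # Bob works on ciphertexts only
print(dec(c_product))                    # 42: the product of the two plaintexts
```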

 
3 Mar. 9 (6A112)  Danny Ziyi Chen  Algorithmic Problems and Solutions in Computer-Aided Radiation Cancer Treatment and Related Biomedical Applications
    dchen@cse.nd.edu

Today's computer technologies have entered the arena of medical diagnosis and treatment, together with new advances in biomedical imaging, radiation physics, computer vision, instrumentation, and robotics, to achieve minimally invasive surgery and therapy.

Image-guided computer-aided radiation therapy and surgery is an emerging interdisciplinary area that seeks to develop state-of-the-art computer technologies for the diagnosis, planning, and optimization of medical therapy and surgery for cancer treatment. In this talk, we present new developments and trends in this important and exciting area.

We discuss a set of core computational problems in image-guided radiation therapy and surgery, present efficient algorithms for them, and show experimental and clinical results. The previously best-known solutions for these problems in the medical literature, which are used in daily clinical treatment, are all heuristics that do not guarantee good solution quality and may take a long computation time. Our new algorithms, based on graph-theoretic approaches and geometric optimization techniques, are efficient and achieve clinical treatments of substantially better quality. Experimental studies and clinical applications of our algorithms on real cancer treatment cases are also discussed.

Dr. Danny Ziyi Chen received the B.S. degrees in Computer Science and in Mathematics from the University of San Francisco, California in 1985, and the M.S. and Ph.D. degrees in Computer Science from Purdue University, West Lafayette, Indiana, USA in 1988 and 1992, respectively. He has been on the faculty of the Department of Computer Science and Engineering at the University of Notre Dame, Indiana, USA since 1992, and is currently a Professor.

Dr. Chen's main research interests are in the areas of computational geometry, algorithms and data structures, computational biomedicine, biomedical imaging, VLSI, and data mining.  He has published many papers in these areas, and holds four US patents on technical inventions for radiation cancer
treatment and biomedical imaging. Dr. Chen received the CAREER Award of the US National Science Foundation (NSF) in 1996, the James A. Burns, C.S.C. Award for Graduate Education of the University of Notre Dame in 2009, and was selected as a Laureate in the 2011 Computerworld Honors Program for the work of his team on "Arc-Modulated Radiation Therapy".
4 Mar. 16  Peter Bro Miltersen  Recent results on Howard's algorithm
    bromille@cs.au.dk
Abstract: Howard's algorithm (a.k.a. policy iteration) is a standard algorithm for solving Markov decision processes. Furthermore, generalizations of the algorithm solve various kinds of two-player games, including parity games, concurrent reachability games and Shapley's stochastic games. The worst-case time complexity of the algorithm was until recently poorly understood, but in recent years a surge of publications on the topic has given us an almost complete understanding of its complexity in the various settings in which it applies. We shall give a survey of these results and explain how they unexpectedly led to the answer to a classical open question concerning the complexity of a randomized variant of the simplex algorithm for linear programming. We shall also present the open problems that remain.
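For readers who have not seen it, here is a minimal sketch of Howard's policy iteration on a made-up toy Markov decision process; the transition probabilities, rewards and discount factor are illustrative values, not anything from the talk.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Howard's policy iteration. P[a, s, s'] = transition probability, R[a, s] = reward."""
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)               # start from an arbitrary policy
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(n_states)]
        r_pi = R[policy, np.arange(n_states)]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: switch every state to its best one-step action.
        q = R + gamma * P @ v                             # q[a, s] = value of action a in state s
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v                              # no switch improves: optimal
        policy = new_policy

# Toy 3-state, 2-action MDP (made-up numbers).
P = np.array([[[0.8, 0.2, 0.0], [0.0, 0.9, 0.1], [0.1, 0.0, 0.9]],
              [[0.1, 0.0, 0.9], [0.5, 0.5, 0.0], [0.0, 0.2, 0.8]]])
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 2.0]])
print(policy_iteration(P, R))
```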

Peter Bro Miltersen is professor of computer science at Aarhus University, Denmark. He is head of the Danish part of the Tsinghua-Aarhus Center for the Theory of Interactive Computation (CTIC), a joint center created by IIIS and Aarhus University in 2011. His research interests are computational complexity theory, game theory and algorithms.

5 Mar. 23 (6A112)  Kihwan Kim  Quantum computation and simulation with trapped ions
  kimkihwan@mail.tsinghua.edu.cn
In this lecture, I will introduce the fundamental ideas of quantum computation with trapped-ion systems and discuss how such a system performs quantum simulations that would be intractable on conventional computers. First, I will review the basic procedure of quantum computation and explain the experimental realization of the operations with a string of trapped ions. Then, I will elucidate the important principle behind producing a non-trivial quantum gate, which is closely related to the spin-spin interaction used in quantum simulation. I will also present concrete methods to generate various spin models, including the Ising, XY and Heisenberg models, which are in general impossible to calculate exactly. Finally, I will show a couple of pioneering experimental demonstrations of quantum simulation and discuss the prospects for the trapped-ion quantum simulator to outperform classical computation in the near future.
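For a rough sense of why classical simulation of such spin models is expensive, here is a small illustrative sketch (not from the lecture) that exactly diagonalizes a transverse-field Ising chain by brute force; the cost grows as 2^N with the number of spins, which is exactly where a trapped-ion simulator is expected to help. The couplings J and h below are arbitrary.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

def op_on(site_op, site, n):
    """Embed a single-site operator at position `site` in an n-spin chain."""
    ops = [site_op if i == site else I2 for i in range(n)]
    return reduce(np.kron, ops)

def ising_hamiltonian(n, J=1.0, h=0.5):
    """H = -J * sum_i sz_i sz_{i+1} - h * sum_i sx_i (open boundary)."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_on(sz, i, n) @ op_on(sz, i + 1, n)
    for i in range(n):
        H -= h * op_on(sx, i, n)
    return H

H = ising_hamiltonian(6)                        # already a 64 x 64 matrix for just 6 spins
print("ground-state energy of a 6-spin chain:", np.linalg.eigvalsh(H).min())
```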
 
6 Mar. 30  Christos Papadimitriou  Hot topics in algorithmic mechanism design: from generalizations of Myerson's auction to the approximability deficit of truthfulness
    christos@cs.berkeley.edu
Christos H. Papadimitriou is C. Lester Hogan Professor of Computer Science at UC Berkeley. Before joining Berkeley in 1996 he taught at Harvard, MIT, Athens Polytechnic, Stanford, and UCSD. He has written five textbooks and many articles on algorithms and complexity, and their applications to optimization, databases, AI, economics, evolution, and the Internet. He holds a PhD from Princeton, and honorary doctorates from ETH (Zurich), the Universities of Macedonia (Thessaloniki), Athens, Cyprus, and Patras. He is a member of the National Academy of Sciences, the American Academy of Arts and Sciences and of the National Academy of Engineering, and a fellow of the ACM. His second novel "Logicomix" has been translated into over 20 languages.
7 Apr. 6  Uri Zwick  Randomized pivoting rules for the simplex algorithm
    zwick@tau.ac.il
The simplex algorithm is one of the most widely used algorithms for solving linear programs in practice. Its worst-case complexity, however, with essentially all known deterministic pivoting rules is exponential. There is, however, a randomized pivoting rule under which the expected running time of the simplex algorithm is subexponential. I will describe this pivoting rule, known as Random-Facet, discovered independently by Kalai and by Matousek, Sharir and Welzl, and sketch a very recent proof that it is not polynomial.
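To fix ideas about what a randomized pivoting rule is, here is a teaching sketch of the tableau simplex method that picks a random improving (entering) column at each step. Note that this is a simple random-edge-style rule, not the Random-Facet rule discussed in the talk, and it omits anti-cycling safeguards; the small LP at the bottom is made up.

```python
import numpy as np, random

def simplex_random_pivot(c, A, b, seed=0):
    """Maximize c@x s.t. A@x <= b, x >= 0 (assumes b >= 0), choosing a random improving column."""
    rng = random.Random(seed)
    m, n = A.shape
    # Tableau: [A | I | b] with slack variables; objective row starts as [-c | 0 | 0].
    T = np.hstack([A.astype(float), np.eye(m), b.astype(float).reshape(-1, 1)])
    obj = np.concatenate([-c.astype(float), np.zeros(m + 1)])
    basis = list(range(n, n + m))                        # slacks are the initial basis
    while True:
        improving = [j for j in range(n + m) if obj[j] < -1e-9]
        if not improving:
            break                                        # no improving column: optimal
        col = rng.choice(improving)                      # the randomized pivot choice
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 1e-9 else np.inf for i in range(m)]
        row = int(np.argmin(ratios))
        if ratios[row] == np.inf:
            raise ValueError("unbounded LP")
        T[row] /= T[row, col]                            # pivot on (row, col)
        for i in range(m):
            if i != row:
                T[i] -= T[i, col] * T[row]
        obj -= obj[col] * T[row]
        basis[row] = col
    x = np.zeros(n)
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i, -1]
    return x, c @ x

# Toy LP: maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
print(simplex_random_pivot(c, A, b))   # optimum x = (2, 6), value 36
```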

Prof. Zwick is a Professor of Computer Science at Tel Aviv University. He received his Ph.D. degree from Tel Aviv University in 1989. He spent two years as a postdoc at Warwick University in the United Kingdom, and a sabbatical year in Berkeley. He is also a member of the Chair Professor team of IIIS and visits Tsinghua University often. He works on algorithms, complexity and mathematical games. He received the Lester R. Ford Award and the Robbins Prize from the Mathematical Association of America for his papers on the overhang problem.

8 Apr. 13  Robert Tarjan  Soft Heaps
    prof.tarjan@gmail.com
Abstract: In 1998, Bernard Chazelle introduced a new kind of meldable heap (priority queue) called the soft heap.  Soft heaps trade accuracy for speed: the heap operations are allowed to change the keys of certain items, thereby making these items bad, as long as the number of bad items in the data structure is not too big.  In return, heap operations take sublogarithmic, or even constant, time, depending on the allowed fraction of bad items.  Soft heaps have several applications, including a fast new algorithm for selection and a fast deterministic algorithm for finding minimum spanning trees.  This talk will present a simplified version of soft heaps developed by Haim Kaplan, Uri Zwick, and the speaker.
Robert E. Tarjan is the James S. McDonnell Distinguished University Professor of Computer Science at Princeton University and a Senior Fellow at HP labs.  His main research areas are the design and analysis of data structures and graph and network algorithms.  He has also done work in computational complexity and security.  He has held positions at Cornell University, U. C. Berkeley, Stanford University, NYU, Bell Laboratories, NEC, and InterTrust Technologies.  He is a member of the National Academy of Sciences, the National Academy of Engineering, the American Philosophical Society, and the American Academy of Arts and Sciences.  He was awarded the Nevanlinna Prize in Informatics in 1982 and the Turing Award in 1986.
9 Apr. 20  Jia Xu Sequence Segmentation and Alignment for Statistical Machine Translation
    xujjia@gmail.com  
In the last decade, while statistical machine translation has advanced significantly, there is still much room for further improvements relating to many natural language processing tasks. Human language is composed of sequences of meaningful units. These sequences can be words, phrases, sentences or even articles, serving as basic elements in communication and components for computational modeling. However, in monolingual text some sequences are not naturally separated by delimiters, and in bilingual text both sequence boundaries and their corresponding translations can be unlabeled. This talk addresses solutions for sequence segmentation and alignment in statistical machine translation, including the following topics: Chinese word segmentation, phrase training, parallel sentence exploitation, and domain adaptation. Experimental results on large-scale Chinese-English tasks demonstrate that both the translation quality and the training speed can be improved using the proposed methods.
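As a point of reference for the segmentation problem (and emphatically not one of the statistical methods proposed in the talk), here is a toy forward maximum-matching Chinese word segmenter, a classical dictionary-based baseline; the tiny dictionary is made up.

```python
# Toy forward maximum-matching segmenter: greedily match the longest dictionary
# word at each position, falling back to a single character when nothing matches.

def max_match(text, dictionary, max_len=4):
    words = []
    i = 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:   # single-character fallback
                words.append(text[i:j])
                i = j
                break
    return words

dictionary = {"清华", "大学", "计算机", "科学"}
print(max_match("清华大学计算机科学", dictionary))
# ['清华', '大学', '计算机', '科学']
```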
Dr. Jia Xu is an assistant professor at IIIS, Tsinghua University. She received her Ph.D. in Computer Science from RWTH Aachen University in October 2010, where she worked as a research assistant in Hermann Ney's group. In 2007 and 2008, she was a research intern at the Speech Group of IBM Watson and then at the Natural Language Processing Group of Microsoft Research Redmond. She was also a senior researcher and project leader at DFKI in Germany before coming to IIIS. Her main research interests include Pattern Recognition, Human Language Technology, Machine Learning and Computational Linguistics.
10 Apr. 27  Yu Yu  Cryptography with Compromised Secrets
    YUYU0002@e.ntu.edu.sg
Modern cryptography has been tremendously successful in basing various tasks on computationally intractable problems, provided that the underlying secrets have not been compromised or tampered with by any malicious party. However, relatively recently (in the late 90's) this precondition was found to be not always respected: keys and intermediate results can be compromised, flipped or modified by various forms of physical attacks, which break the implementations of many "mathematically unbreakable" crypto-systems, such as AES, RSA and 3DES. In this talk, we introduce several forms of physical attacks, including side-channel attacks and more invasive attacks, and then present formal models, constructions and security analyses that provide provable security (i.e. via the reductionist approach) against these attacks. We will also look at open problems and promising research directions.
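As a small illustration of why implementations leak (a standard textbook example, not taken from the talk), the sketch below compares an early-exit byte-by-byte comparison of a secret value, whose running time depends on where the first mismatch occurs, with Python's constant-time `hmac.compare_digest`.

```python
import hmac, secrets, timeit

def leaky_equals(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:            # early exit: running time reveals the first mismatching byte
            return False
    return True

secret = secrets.token_bytes(16)
guess_bad_at_0 = bytes([secret[0] ^ 1]) + secret[1:]     # wrong in the very first byte
guess_bad_at_15 = secret[:15] + bytes([secret[15] ^ 1])  # wrong only in the last byte

for label, guess in [("mismatch at byte 0", guess_bad_at_0),
                     ("mismatch at byte 15", guess_bad_at_15)]:
    t = timeit.timeit(lambda: leaky_equals(secret, guess), number=300_000)
    print(f"{label}: {t:.3f}s")          # the later the mismatch, the longer it takes

# Constant-time comparison, the standard countermeasure:
print(hmac.compare_digest(secret, guess_bad_at_15))      # False, with data-independent timing
```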
Yu Yu is an assistant professor at the Institute for Interdisciplinary Information Sciences, Tsinghua University. He received his B.Sc. degree from Fudan University in 2003, and his Ph.D. from Nanyang Technological University in 2006. He worked as a cryptographic analyst at the ICT security lab of T-Systems Singapore from 2006 to 2008, as a post-doctoral researcher at the UCL Crypto Group (Belgium) from 2008 to 2010, and as an associate professor at East China Normal University in 2011.
11 May. 4    
 
12 May. 11  Weiying Ma  The Future of Web Search and a New Moore’s Law in Knowledge Engineering
    wyma@microsoft.com
In history, Moore’s law has been used to describe the phenomenon of exponential improvement in technology when it has a virtuous cycle that makes technology improvement proportional to technology itself. For example, chip performance doubled every 18-24 months because better processors support the development of better layout tools, which in turn support the development of even better processors. In this talk, I will describe a new Moore’s law that is being created in knowledge engineering, driven by the self-reinforcing nature of three trends and technical advancements: big data, machine learning, and Internet economics. I will explain how we can take advantage of this new effect to develop a new generation of semantic and knowledge-based search engines. Specifically, my presentation will cover the following three areas: 1) Knowledge acquisition: our goal is to build a large and comprehensive entity graph and knowledge graph to complement the web graph and social graphs; I will introduce techniques for entity extraction and knowledge-base construction through interactive mining and crowdsourcing. 2) Managing knowledge: our goal is to support advanced analytical queries by combining probabilistic knowledge with a distributed platform. 3) Knowledge-empowered search and applications: the knowledge we have acquired and curated enables many applications; in particular, I will show how it can be used to understand queries, enable new entity-centric search experiences, and provide direct answers to natural language queries.
13 May. 18  Yiyu Shi  Order Statistics 101: From Horse Racing to At-Speed Testing
    yshi@mst.edu
Abstract: Horse racing is the second largest sport in the United Kingdom, attracting more than 6 million admissions per annum and 2 million individual fans. Approximately 12 billion British pounds are estimated as betting turnover. At-speed testing is a common technique to identify possibly bad microprocessors due to process variations by testing a selected number of paths. These two seemingly irrelevant topics have one thing in common: both need extensive computer-based Monte Carlo simulations for statistical ordering, and may take an unacceptably long runtime to complete. In this presentation, I'll show how order statistics, from its root to its full blossom, comes to the rescue with elegant algorithms of polynomial complexity. While the concepts and algorithms can be widely applied to many areas where statistical information is involved, at least you will become a "statistical winner" in your next horse racing bet!
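As a tiny taste of the order-statistics-versus-Monte-Carlo theme (an illustrative example, not from the talk): for n i.i.d. Uniform(0,1) samples, the k-th order statistic is Beta(k, n-k+1) distributed with mean k/(n+1), so its behavior can be written down in closed form instead of being estimated by lengthy simulation.

```python
import numpy as np

# Compare a brute-force Monte Carlo estimate of the k-th order statistic of
# n uniform samples against the closed-form mean k/(n+1).
rng = np.random.default_rng(0)
n, k, trials = 20, 18, 100_000

samples = rng.uniform(size=(trials, n))
kth = np.sort(samples, axis=1)[:, k - 1]        # k-th smallest value in each trial

print("Monte Carlo mean of the k-th order statistic:", kth.mean())
print("closed-form mean k/(n+1):                    ", k / (n + 1))
```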
Bio:
Dr. Yiyu Shi received his B.S. degree in Electronic Engineering from Tsinghua University, Beijing, China in 2005, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of California, Los Angeles in 2007 and 2009, respectively. In Sept. 2010, he joined the faculty of the Electrical and Computer Engineering Department at Missouri University of Science and Technology (formerly University of Missouri, Rolla) as an assistant professor, and is currently the site associate director of the National Science Foundation I/UCRC Net-Centric Software and Systems Center. He has served on the technical program committees of several top conferences, including ICCAD, ICCD, and ISPD, and as guest editor (in-chief) for a few international journals. His research interests include advanced design and test technologies for 3D ICs, and smart grid applications. In recognition of his research, five of his papers have been nominated for the Best Paper Award at top conferences (DAC'05, ICCAD'07, ICCD'08, ASPDAC'09, DAC'09). He was also the recipient of the IBM Invention Achievement Award in 2009, the second-place winner of the TAU power grid analysis contest (sponsored by IBM) in 2011, and the third-place winner of the ISPD discrete gate sizing contest in 2012. For more details, please visit http://ece.mst.edu/~yshi
14 May. 25  Jiawei Han  Data Mining in Heterogeneous Information Networks
    hanj@cs.uiuc.edu
Short Bio:
Jiawei Han is Bliss Professor of Computer Science at the University of Illinois at Urbana-Champaign. He has been researching data mining, information network analysis, database systems, and data warehousing, with over 600 journal and conference publications. He has chaired or served on many program committees of international conferences, including as PC co-chair for the KDD, SDM, and ICDM conferences, and Americas Coordinator for the VLDB conferences. He also served as the founding Editor-in-Chief of ACM Transactions on Knowledge Discovery from Data, and is serving as the Director of the Information Network Academic Research Center supported by the U.S. Army Research Lab. He is a Fellow of ACM and IEEE, and received the 2004 ACM SIGKDD Innovations Award, the 2005 IEEE Computer Society Technical Achievement Award, the 2009 IEEE Computer Society Wallace McDowell Award, and the 2011 Daniel C. Drucker Eminent Faculty Award at UIUC. His book "Data Mining: Concepts and Techniques" is widely used as a textbook worldwide.
Abstract:
Many objects in the real world are interconnected, forming complex, heterogeneous but semi-structured information networks. Unlike studies on social network analysis where friendship networks or web page networks form homogeneous information networks, heterogeneous information networks reflect complex and structured relationships among multiple types of objects. For example, in a university network, objects of multiple types, such as students, professors, courses, and departments, and multiple types of relationships, such as teach and advise, are intertwined, providing rich information.
We explore new methodologies for mining hidden knowledge in such heterogeneous information networks, including integrated ranking and clustering, classification, data integration, trust analysis, role discovery and prediction. We show that structured information networks are informative, and link analysis on such networks is powerful at uncovering critical knowledge hidden in large networks. We also present a few promising research directions on mining heterogeneous information networks. 
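As a toy flavor of what mining a heterogeneous network can look like (a generic meta-path example, not the specific methods of the talk), the sketch below counts author-paper-author meta-paths in a tiny made-up bibliographic network and normalizes the counts into a PathSim-style similarity between authors.

```python
import numpy as np

# Tiny author-paper network; rows are authors, columns are papers (all made up).
authors = ["Ada", "Bob", "Cai"]
A = np.array([[1, 1, 0],      # A[i, j] = 1 if author i wrote paper j
              [1, 0, 1],
              [0, 1, 1]])

M = A @ A.T                              # counts of author-paper-author meta-paths
diag = np.diag(M)
pathsim = 2 * M / (diag[:, None] + diag[None, :])   # PathSim-style normalization

print("meta-path counts:\n", M)
print("similarity between", authors[0], "and", authors[1], "=", pathsim[0, 1])
```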
15 Jun. 1  Prof. CHING Pak-Chung  Digital Speech Coding
    pcching@ee.cuhk.edu.hk
Speech is the natural means of human communication, which we all take for granted. However, to facilitate long-haul communication between people living far apart, speech waveforms need to be transmitted as electrical signals, either via wire-lines or wireless media. There are obvious advantages to transmitting speech signals in digital form; for instance, digital transmission offers resilience against noise and interference.
Speech signals can be encoded and digitally transmitted in many different ways, and the major factors in deciding which method to adopt include, but are not limited to, the transmission rate, the received speech quality, and the complexity of implementation. In this talk, we shall cover basic speech properties and the speech production mechanism, as well as several types of speech encoding techniques such as waveform coding, linear predictive coding and sub-band coding.
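As a small illustration of the idea behind linear predictive coding (a sketch with a synthetic signal, not material from the talk): model each sample as a linear combination of the previous p samples, transmit the predictor coefficients plus the much smaller prediction residual, and reconstruct at the receiver. The coefficients below come from solving the Yule-Walker equations on a toy decaying sinusoid.

```python
import numpy as np

def lpc(signal, order):
    """Estimate LPC coefficients from the autocorrelation (Yule-Walker equations)."""
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

fs = 8000
t = np.arange(0, 0.02, 1 / fs)
speech = np.exp(-40 * t) * np.sin(2 * np.pi * 300 * t)   # toy "voiced" segment

a = lpc(speech, order=4)
predicted = np.array([a @ speech[i - 4:i][::-1] for i in range(4, len(speech))])
residual = speech[4:] - predicted
print("predictor coefficients:", np.round(a, 3))
print("residual energy / signal energy:", residual @ residual / (speech @ speech))
```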
16 Jun. 8  
Transforming Healthcare through Data
With over 16% of the US’s GDP devoted to healthcare, and governments around the world working toward improving healthcare for their people, making healthcare more effective and efficient is becoming a great opportunity for the technology industry. In this talk, I will present several opportunities to create infrastructure to extract actionable insights from healthcare-related data, in the areas of enterprise data collected in hospitals and personal data collected by individuals.
Holidays are marked in red.