IIIS - SNC Seminar (Systems, Networks, and Communications)

Welcome to the SNC seminar homepage! This homepage provides seminar information for the SNC seminar series at IIIS @ Tsinghua University. This seminar series aims at bringing together researchers interested in systems, networks, communications, and other related fields.

If you are interested in giving a talk, please contact our organizers Longbo Huang and Wei Xu.

Fall 2015

Student organizer – Yue Yu

Tue Dec 22 2015 – Cyberphysical systems: Distributed computing, Synchronization, and Data Mining

Speaker: Prof. Nick Freris, New York University, Abu Dhabi

Location and Time: FIT 1-312, Tue Dec 22 2015, from 2pm-3pm

Abstract: We are entering the era of cyberphysical systems (CPS), i.e., very large networks in which collaborating intelligent agents possessing sensing, communication and computation capabilities are interconnected for controlling physical systems via complex real-time operations. The design of such systems poses many challenges, most notably: a) decentralized coordination, b) efficient resource allocation and c) mining information from big data generated by thousands, possibly millions of nodes. In this talk, I will present results on distributed and asynchronous management of CPS, specifically: distributed computing, clock synchronization, and exact data mining from inexact big data.

We propose and analyze a randomized iterative algorithm for solving large-scale linear systems. The scheme has exponential convergence and is amenable to distributed implementation. Our method demonstrates substantial speed-ups for sparse systems over state-of-the-art linear solvers. We leverage the analysis to propose a new design method for randomized gossip algorithms for achieving network-wide consensus.
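The abstract's description (randomized, exponentially convergent, amenable to distributed implementation) fits the well-known randomized Kaczmarz family of solvers. As a rough illustration only, here is a minimal sketch of the Strohmer-Vershynin randomized Kaczmarz method; it is not necessarily the speaker's exact algorithm.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Solve a consistent system A x = b by projecting the iterate onto
    one randomly chosen row's hyperplane per step; rows are sampled with
    probability proportional to their squared norm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.einsum("ij,ij->i", A, A)   # squared row norms
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += ((b[i] - A[i] @ x) / row_norms2[i]) * A[i]
    return x

# Consistent overdetermined system with a known solution
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x = randomized_kaczmarz(A, b)
print(np.linalg.norm(x - x_true))  # error decays exponentially in iters
```

Each step touches a single row of A, which is what makes schemes of this kind attractive for distributed implementation: a node only needs its own equation, not the whole system.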

Clock synchronization is indispensable for a CPS to perform as a whole via decentralized actions, and real-time applications impose stringent constraints on synchronization accuracy. We present fundamental limits on synchronizing clocks in a network, and in fact prove that clock synchronization is generally impossible. Inspired by the system implications of our theory as well as our results on gossiping, we design novel synchronization protocols with improved accuracy, convergence speed, as well as energy savings.

A large CPS inevitably generates big data, and efficient information retrieval in real time is required. The performance of similarity search/classification depends heavily on distance estimation from compressed data. We develop a fast algorithm to obtain the tightest upper/lower bounds on the Euclidean distance between data series. Extensive experiments indicate a significant speed-up of search schemes due to the effective pruning resulting from accurate distance estimation.
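The talk's contribution is the *tightest* such bounds; as a simpler illustration of the principle these schemes build on, the sketch below shows the standard coefficient-truncation lower bound: after an orthonormal transform, the distance computed from the first k coefficients never exceeds the true Euclidean distance (Parseval's theorem), so candidates whose lower bound already exceeds the search radius can be pruned without decompressing them.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(256))  # random-walk "data series"
y = np.cumsum(rng.standard_normal(256))

# Orthonormal FFT preserves Euclidean norms (Parseval's theorem)
X = np.fft.fft(x, norm="ortho")
Y = np.fft.fft(y, norm="ortho")

k = 8  # keep only the first k coefficients as the compressed form
lower_bound = np.linalg.norm(X[:k] - Y[:k])
true_dist = np.linalg.norm(x - y)
# Dropping coefficients only removes nonnegative terms from the sum,
# so pruning with the bound never discards a true match.
print(lower_bound, true_dist)
```

Upper bounds additionally require some norm information about the discarded coefficients, which is one reason tight two-sided bounds are nontrivial.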

Biography: Nick Freris is currently an assistant professor of Electrical and Computer Engineering at New York University Abu Dhabi, and the director of the Cyberphysical Systems Laboratory (CPSLab). He is also a member of the Center for Interdisciplinary Studies in Security and Privacy (CRISSP). He received the Diploma in Electrical and Computer Engineering from the National Technical University of Athens (NTUA), Greece in 2005 and the M.S. degree in Electrical and Computer Engineering, the M.S. degree in Mathematics, and the Ph.D. degree in Electrical and Computer Engineering all from the University of Illinois at Urbana-Champaign in 2007, 2008, and 2010, respectively.

His research lies in cyberphysical systems, in particular: distributed estimation, optimization and control in wireless and sensor networks, data mining/machine learning, transportation networks, as well as sparse sampling. Dr. Freris has published in several top-tier IEEE, ACM and SIAM journals and conferences in Electrical Engineering, Computer Science and Applied Mathematics. His research was recognized with two IBM invention achievement awards, a Vodafone fellowship and the Gerondelis foundation award. Previously, Dr. Freris was a senior researcher in the School of Computer and Communication Sciences at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, where he was the project manager of a long-standing collaboration with Qualcomm. From 2010 to 2012, he was a postdoctoral researcher at IBM Research – Zurich, Switzerland, where he was involved in a 5-year ERC project on Big Data, as well as the IBM Operations Research Group. During his graduate years, he also worked as a research intern at Deutsche Telekom and Xerox Research labs. Dr. Freris is a member of IEEE, SIAM and ACM.

Tue Sept. 29 – Exploit Social Trust for Cooperative Networking: A Social Group Utility Maximization Framework

Speaker: Dr. Xu Chen, University of Goettingen, Germany

Location and Time: FIT 1-222, Tue Sept. 29, 2015, from 11:00am-12:00pm

Abstract: The combination of exploding demand and limited resources poses a significant challenge for future wireless network design. Since hand-held devices are carried by human beings, we advocate a socially aware approach to enhance cooperative networking. Such trust-based cooperation among mobile devices enables self-organizing networking, has the potential to achieve substantial gains in spectral efficiency, and can lead to significant increases in network capacity. In particular, mobile devices are coupled in the physical domain due to the interference relationship in data transmissions, and also coupled in the social domain due to the social ties among them. It is a win-win for these devices to help users with whom they share social trust. With this insight, we propose a novel social group utility maximization framework for cooperative networking, where each user carries out resource allocation to maximize its social group utility, defined as the weighted sum of its own utility and the utilities of other users having social trust towards it. Through a variety of wireless networking applications, we demonstrate that the social group utility maximization framework can provide rich modeling flexibility for cooperative networking.
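As a toy numerical illustration of the social group utility idea (all utilities, powers, and tie strengths below are made up for illustration, not taken from the talk): a user who weighs a socially trusted neighbor's utility into its own objective chooses a less aggressive transmit power than a purely selfish user would.

```python
import numpy as np

# Two-user interference toy model: user 0 picks transmit power p0 from
# {1, 2, 3}; user 1's power is fixed at p1 = 2. Rates follow a
# log(1 + SINR) form with unit noise; each unit of power costs 0.1.
powers = np.array([1.0, 2.0, 3.0])
p1, cost = 2.0, 0.1

u0 = np.log(1 + powers / (1 + p1)) - cost * powers  # user 0's own utility
u1 = np.log(1 + p1 / (1 + powers))                  # user 1's utility vs. p0

w = 1.0  # social-tie strength from user 0 toward user 1
selfish_best = powers[np.argmax(u0)]
social_best = powers[np.argmax(u0 + w * u1)]  # social group utility
print(selfish_best, social_best)
```

Here the selfish best response is the highest power (3), while maximizing the social group utility picks the lowest (1), since the social tie internalizes the interference inflicted on the trusted neighbor.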

Biography: Dr. Xu Chen received the Ph.D. degree from the Chinese University of Hong Kong in 2012. From 2012 to 2014, Dr. Chen was a postdoctoral research fellow with Arizona State University, Tempe, USA. In April 2014, he joined the Faculty of Mathematics and Computer Science, University of Goettingen, Germany, as a Humboldt Scholar. Dr. Chen has published more than 40 scientific papers in internationally recognized journals (e.g., ACM/IEEE TON, JSAC and TMC) and top conferences (e.g., INFOCOM, MOBIHOC and ICDCS). Dr. Chen is the recipient of the prestigious Humboldt research fellowship awarded by the Alexander von Humboldt Foundation, the 2014 Hong Kong Young Scientist Award Runner-Up by the Hong Kong Institution of Science, the Best Paper Runner-Up Award at the IEEE International Conference on Computer Communications (INFOCOM 2014), and the Honorable Mention Award at the IEEE International Conference on Intelligence and Security Informatics (ISI 2010). Dr. Chen is an Associate Editor of the EURASIP Journal on Wireless Communications and Networking, guest editor of the International Journal of Big Data Intelligence, the special track co-chair of ISVC’15, publicity co-chair of NetGCoop’14, and serves as a technical program committee (TPC) member for many leading conferences including ACM MOBIHOC, IEEE GLOBECOM, ICC, and WCNC.

Spring 2015

Student organizer – Yue Yu

Wed July 1 – On the Strategic Diffusion over Social Networks

Speaker: Prof. Yung Yi, KAIST

Location and Time: FIT 1-312, Wed July 1, 2015, from 10:00am-11:00am

Abstract: A variety of models have been proposed and analyzed to understand how a new innovation (e.g., a technology, a product, or even a behavior) diffuses over a social network, where the literature has mainly assumed that the new innovation spreads like a kind of epidemic process. In this talk, we consider a different diffusion model — a game-based model, where each individual makes a selfish, rational choice in terms of its payoff in adopting the new innovation. We first discuss how this strategic diffusion occurs when people are either non-progressive or progressive, and then talk about how to speed up the diffusion process via appropriate seeding of some individuals with a given budget. Our analysis covers various representative social network topologies, such as Erdős-Rényi graphs, planted partition graphs, and power-law graphs, which we believe yields useful implications in practice. Parts of this talk were presented at ACM Sigmetrics 2014 and IEEE Infocom 2015.
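One standard game-based diffusion model of this kind is best-response dynamics in a network coordination game: a node adopts the innovation once a large enough fraction of its neighbors have adopted, where the threshold comes from the payoffs of the underlying 2x2 game. The sketch below is a generic progressive version of that model, not necessarily the talk's exact formulation.

```python
import numpy as np

def diffuse(adj, seeds, q, steps=20):
    """Progressive best-response dynamics: a node adopts iff at least a
    fraction q of its neighbors have adopted; adopters never revert."""
    n = len(adj)
    state = np.zeros(n, dtype=bool)
    state[list(seeds)] = True
    for _ in range(steps):
        frac = adj @ state / np.maximum(adj.sum(axis=1), 1)
        state = state | (frac >= q)
    return state

# Line graph 0-1-2-3-4, seeded at node 0, adoption threshold q = 0.5
adj = np.zeros((5, 5))
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1
state_final = diffuse(adj, {0}, 0.5)
print(state_final)
```

On this line graph a single well-placed seed is enough to cascade adoption to every node, which is exactly why seeding under a budget, as in the talk, is an interesting optimization problem.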

Bio: Yung Yi received his B.S. and M.S. degrees from the School of Computer Science and Engineering, Seoul National University, South Korea in 1997 and 1999, respectively, and his Ph.D. from the Department of Electrical and Computer Engineering at the University of Texas at Austin, USA in 2006. From 2006 to 2008, he was a post-doctoral research associate in the Department of Electrical Engineering at Princeton University. He is now an associate professor in the Department of Electrical Engineering at KAIST, South Korea. His current research interests include the design and analysis of computer networking and wireless communication systems, economic aspects of communication networks (aka network economics), and social networks. He was the recipient of two best paper awards, at IEEE SECON 2013 and ACM Mobihoc 2013. He is now an associate editor of IEEE/ACM Transactions on Networking, the Journal of Communication Networks, and the Elsevier Computer Communications Journal.

Wed July 1 – Bringing Performance to the Cloud

Speaker: Prof. Dongsu Han, KAIST

Location and Time: FIT 1-312, Wed July 1, 2015, from 11:00am-12:00pm

Abstract: Cloud computing has been tremendously successful in providing the scalability needed for popular Internet-based services such as Facebook, Google, and Netflix. Popular Internet-based services that we use everyday rely on a public or private cloud platform to provide their services world-wide. To scale out the system performance, these services utilize hundreds of thousands to millions of servers that cost billions of dollars. Effectively utilizing individual resources in the cloud becomes crucial in reducing the cost (and the energy consumption) at this scale. In this talk, we look at how to improve the performance of a single machine within the cloud. One of the key problems that limits performance is that applications and networking stacks are not designed to scale well in multicore environments. Modern machines have tens of cores and multiple 10Gbps Ethernet links. However, existing designs cannot effectively utilize these resources. In this talk, we focus on improving the performance of two essential building blocks, an in-memory key-value store and the TCP stack, that Internet-based services commonly rely on. By employing multi-core aware designs and bypassing the OS kernel to eliminate its overhead, we show that we can dramatically increase their performance, by up to 13.5x and 3x respectively. * This is a joint work with H. Lim (CMU), M. Kaminsky (Intel), D. Andersen (CMU), E. Jeong, K. Park (KAIST), and students at KAIST. Both studies appeared at NSDI 2014, one of them winning the “Community Award”.

Bio: Dongsu Han is an assistant professor at KAIST in the Department of Electrical Engineering and the Graduate School of Information Security. He received his Ph.D. from the Computer Science Department at Carnegie Mellon University in 2012. His research interests include Internet architectures, cloud and distributed systems, and Internet content delivery. As part of his thesis work, he worked on a new Internet architecture called XIA, one of the major, still ongoing future Internet architecture projects funded by the NSF.

Tuesday June 9 – An Optimal and Distributed Method for Voltage Regulation in Power Distribution Systems

Speaker: Prof. Albert Lam, HKBU

Location and Time: FIT 1-222, Tuesday June 9, 2015, from 10:30am-11:30am

Abstract: The problem of voltage regulation is addressed in power distribution networks with deep penetration of distributed energy resources, e.g., renewable-based generation, and storage-capable loads such as plug-in hybrid electric vehicles. The problem is cast as an optimization program, where the objective is to minimize the losses in the network subject to constraints on bus voltage magnitudes, limits on active and reactive power injections, transmission line thermal limits and losses. Sufficient conditions are provided, under which the optimization problem can be solved via its convex relaxation. Data from existing networks show that these sufficient conditions are expected to be satisfied by most networks. An efficient distributed algorithm is provided to solve the problem. The algorithm adheres to a communication topology described by a graph that is the same as the graph describing the electrical network topology.

Bio: Albert Lam received the BEng degree (First Class Honors) in Information Engineering from The University of Hong Kong (HKU), Hong Kong, in 2005, and obtained the PhD degree at the Department of Electrical and Electronic Engineering of HKU in 2010. He was a postdoctoral scholar at the Department of Electrical Engineering and Computer Sciences of the University of California, Berkeley, in 2010-12. He is a Croucher research fellow and now a research assistant professor at the Department of Computer Science of Hong Kong Baptist University, Hong Kong. His research interests include optimization theory and algorithms, evolutionary computation, smart grid, and smart city planning.

Friday May 15 – Energy Coupon: A Mean Field Game Perspective on Demand Response in Smart Grids

Speaker: Mr. Jian Li, Ph.D. Candidate, TAMU

Location and Time: FIT 1-312, Friday May 15, 2015, from 10:30am-11:30am

Abstract: We consider the problem of a Load Serving Entity (LSE) trying to reduce its exposure to electricity market volatility by incentivizing demand response in a Smart Grid setting. We focus on the day-ahead electricity market, wherein the LSE has a good estimate of the statistics of the wholesale price of electricity at different hours in the next day, and wishes its customers to move a part of their power consumption to times of low mean and variance in price. Based on the time of usage, the LSE awards a differential number of “Energy Coupons” to each customer in proportion to the customer’s electricity usage at that time. A lottery is held periodically in which the coupons held by all the customers are used as lottery tickets.

Our study takes the form of a Mean Field Game, wherein each customer models the number of coupons that each of its opponents possesses via a distribution, and plays a best response pattern of electricity usage by trading off the utility of winning at the lottery versus the discomfort suffered by changing its usage pattern. The system is at a Mean Field Equilibrium (MFE) if the number of coupons that the customer receives is itself a sample drawn from the assumed distribution. We show the existence of an MFE, and characterize the mean field customer policy as having a multiple-threshold structure in which customers who have won too frequently or infrequently have low incentives to participate. We then numerically study the system with a candidate application of air conditioning during the summer months in the state of Texas. Besides verifying our analytical results, we show that the LSE can potentially attain quite substantial savings using our scheme. Our techniques can also be applied to resource sharing problems in other societal networks such as transportation or communication.

Bio: Jian Li is a Ph.D. student in the Department of Electrical and Computer Engineering at Texas A&M University. He received a B.E. in Electronic Engineering from Shanghai Jiao Tong University in June 2012. His research interests include modeling and analysis of communication networks and social networks, network economics, game theory, queueing games, optimization and algorithms.

Friday May 11 – Fast-Converging Distributed Optimization for Networked Systems: A Second-Order Approach

Speaker: Prof. Jia Liu, OSU

Location and Time: FIT 1-312, Friday May 11, 2015, from 2pm-3pm

Abstract: The fast growing scale and heterogeneity of modern wired/wireless communications networks necessitate the design of distributed congestion control and routing optimization algorithms. To date, however, most of the existing schemes are based on a key idea called the back-pressure algorithm. Despite having many salient features, the first-order subgradient nature of the back-pressure based schemes results in slow convergence and hence poor delay performance. To overcome these limitations, in this research, we make a first attempt at developing a second-order joint congestion control and routing optimization framework that offers utility-optimality, queue-stability, and fast convergence. Our results of this research are three-fold: i) we propose a new second-order joint congestion control and routing framework based on a primal-dual interior-point approach; ii) we establish utility-optimality and queue-stability of the proposed second-order method; and iii) we show how to implement the proposed second-order method in a distributed fashion. Collectively, our results contribute to the development of an analytical foundation for networked systems design that provides second-order convergence speed.

Bio: Jia (Kevin) Liu received his Ph.D. degree in Electrical and Computer Engineering from Virginia Tech in 2010. He then joined The Ohio State University as a Postdoctoral Researcher in the Department of Electrical and Computer Engineering. In November 2014, he was promoted to Research Assistant Professor in the ECE department at OSU. His current research interests are in the areas of cross-layer optimization of wired/wireless communications networks, cyber-physical systems, smart electric power grid, and big data analytics. Prior to joining Virginia Tech, Dr. Liu was with Bell Labs, Lucent Technologies in Beijing, China, working on 3G wireless standards development. Dr. Liu is a member of IEEE and SIAM. He was a recipient of the Best Paper Award at IEEE ICC 2008, the Best Paper Runner-up Award at IEEE INFOCOM 2011, the Best Paper Runner-up Award at IEEE INFOCOM 2013, and the China National Award for Outstanding Ph.D. Students Abroad in 2008.

Friday April 24 – Distributed Stochastic Optimization and Games via Correlated Scheduling

Speaker: Prof. Michael Neely, USC

Location and Time: FIT 1-312, Friday April 24, 2015, from 10:00am-11:00am

Abstract: This talk considers a system with multiple devices that make repeated decisions based on their own observed events. The events and decisions at each time step determine the values of a utility function and a collection of penalty functions. In the first part of the talk, the goal is to make distributed decisions over time to maximize time average utility subject to time average constraints on the penalties. An example is a collection of power constrained sensors that repeatedly report their own observations to a fusion center. Maximum utility is fundamentally reduced because devices do not know the events observed by others. Optimality is characterized for this distributed context. It is shown that optimality is achieved by correlating device decisions through a commonly known pseudorandom sequence. An optimal algorithm is developed that chooses pure strategies at each time step based on a set of time-varying weights.
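A toy illustration of the common-randomness idea in this first part of the talk: two transmitters that share nothing but a pseudorandom seed can still correlate their pure strategies perfectly. Here the shared sequence elects one "leader" per slot, so the devices never collide without exchanging a single message. This is a hypothetical setup to show the principle, not the paper's weight-based algorithm.

```python
import random

def device_action(device_id, step, seed=42):
    """Decide whether this device transmits in a given slot, using a
    pseudorandom draw every device can compute identically."""
    rng = random.Random(f"{seed}-{step}")  # identical on every device
    leader = rng.randrange(2)              # commonly computed choice
    return device_id == leader             # transmit only if elected

collisions = sum(device_action(0, t) and device_action(1, t)
                 for t in range(1000))
print(collisions)  # 0: decisions are correlated, not independent
```

With independent coin flips the two devices would collide about a quarter of the time; with the common sequence, exactly one device transmits in every slot.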

In the second part of the talk, a related problem is cast in a dynamic game setting. Devices decide whether or not to share information, and will only do so if such sharing does not sacrifice their competitive advantage. Standard Nash equilibrium concepts are inadequate in this scenario. Instead, a new “no regret” goal is introduced and solved for arbitrary event sequences and arbitrary human decision sequences.

Papers on these topics are found here:

  1. http://ee.usc.edu/stochastic-nets/docs/distributed-optimization-ton.pdf

  2. http://arxiv.org/abs/1412.8736

Bio: Michael J. Neely received B.S. degrees in both Electrical Engineering and Mathematics from the University of Maryland, College Park, in 1997. He was then awarded a 3 year Department of Defense NDSEG Fellowship for graduate study at the Massachusetts Institute of Technology, where he received an M.S. degree in 1999 and a Ph.D. in 2003, both in Electrical Engineering. He joined the faculty of Electrical Engineering at the University of Southern California in 2004, where he is currently an Associate Professor. His research interests are in the areas of stochastic network optimization and queueing theory, with applications to wireless networks, mobile ad-hoc networks, and switching systems. Michael received the NSF Career award in 2008, the Viterbi School of Engineering Junior Research Award in 2009, and the Okawa Foundation Research Grant Award in 2012. He is a member of Tau Beta Pi and Phi Beta Kappa.

Friday April 24 – Dynamic Service Migration and Workload Scheduling in Edge-Clouds

Speaker: Dr. Rahul Urgaonkar, IBM Research, TJ Watson

Location and Time: FIT 1-312, Friday April 24, 2015, from 11:00am-12:00pm

Abstract: Edge-clouds provide a promising approach to significantly improve network operational costs by moving computation closer to the network edge. A key challenge in such systems is to decide where and when services should be migrated in response to user mobility and demand variation. The objective is to optimize operational costs while providing rigorous performance guarantees. In this work, we model this as a sequential decision making problem using a Markov Decision Process (MDP). However, departing from traditional solution methods (such as dynamic programming) that require extensive statistical knowledge and are computationally prohibitive, we develop a novel alternate methodology. First we establish an interesting decoupling property of the MDP that reduces it to two independent MDPs on disjoint state spaces. Then, using the technique of Lyapunov optimization over renewals, we design an online control algorithm for the decoupled problem that is provably cost-optimal. This algorithm does not require any statistical knowledge of the system parameters and can be implemented efficiently. We validate the performance of our algorithm using extensive trace-driven simulations. Our overall approach is general and can be applied to other MDPs that possess a similar decoupling property.

Bio: Rahul Urgaonkar is a Research Staff Member with the Cloud-Based Networks group at the IBM TJ Watson Research Center. He is currently a task lead on the US Army Research Laboratory (ARL) funded Network Science Collaborative Technology Alliance (NS CTA) program. He is also a Primary Researcher in the US/UK International Technology Alliance (ITA) research programs. His research is in the area of stochastic optimization, algorithm design and control with applications to communication networks and cloud-computing systems. Before joining IBM research, Rahul was a Scientist with the Network Research group at Raytheon BBN Technologies where he worked on several government funded projects, including the NS CTA and ITA programs. He obtained his Masters and PhD degrees from the University of Southern California in 2005 and 2011 respectively and his Bachelor’s degree (all in Electrical Engineering) from the Indian Institute of Technology Bombay in 2002.

Fall 2014

Student organizer – Xiaohong Hao

Dec 24 – Optimal Control of Wireless Networks: From Theory to Practice

Speaker: Prof. Eytan Modiano, MIT

Location and Time: FIT 1-312, Wednesday Dec 24, 2014, from 10:30am-12:00pm

Abstract: This talk reviews recent advances in network control for wireless networks with stochastic traffic and time-varying channel conditions. We start with a review of the seminal work of Tassiulas and Ephremides on optimal scheduling and routing, i.e., the now famous backpressure algorithm. Despite its theoretical promise, the optimal control strategy has not taken hold in practice, due in part to modeling assumptions that fail to take into account practical considerations. Thus, we will discuss recent efforts to develop variants of backpressure that take such considerations into account. These include efficient distributed scheduling algorithms, as well as new algorithms that take into account practical hardware and protocol limitations.
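The core of the backpressure algorithm is easy to state: on each link, serve the commodity with the largest queue differential (upstream backlog minus downstream backlog), and stay idle if every differential is negative. A minimal sketch of that per-link decision, simplified to a single link with unit rates (the full algorithm also weights differentials by achievable link rates and resolves interference across links):

```python
import numpy as np

def backpressure_choice(q_up, q_down):
    """Pick the commodity with the maximum positive queue differential
    on a link; return None (stay idle) if all differentials are <= 0."""
    diff = np.asarray(q_up) - np.asarray(q_down)
    best = int(np.argmax(diff))
    return best if diff[best] > 0 else None

# Two commodities queued at the link's transmitter vs. its receiver
print(backpressure_choice([5, 2], [1, 4]))  # commodity 0: differential 4
print(backpressure_choice([1, 1], [3, 2]))  # None: serving adds backlog
```

The idle case is exactly the "pressure" intuition: packets only flow from high backlog toward low backlog, which is what yields throughput optimality in the Tassiulas-Ephremides analysis.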

Bio: Eytan Modiano received his B.S. degree in Electrical Engineering and Computer Science from the University of Connecticut at Storrs in 1986 and his M.S. and PhD degrees, both in Electrical Engineering, from the University of Maryland, College Park, MD, in 1989 and 1992 respectively. He was a Naval Research Laboratory Fellow between 1987 and 1992 and a National Research Council Post Doctoral Fellow during 1992-1993. Between 1993 and 1999 he was with MIT Lincoln Laboratory where he was a project leader for MIT Lincoln Laboratory's Next Generation Internet (NGI) project. Since 1999 he has been on the faculty at MIT, where he is a Professor in the Department of Aeronautics and Astronautics and the Laboratory for Information and Decision Systems (LIDS). His research is on communication networks and protocols with emphasis on satellite, wireless, and optical networks. He is an Editor-at-Large for IEEE/ACM Transactions on Networking, and served as Associate Editor for IEEE Transactions on Information Theory, IEEE/ACM Transactions on Networking, and the AIAA Journal of Aerospace Information Systems. He was the Technical Program co-chair for IEEE Wiopt 2006, IEEE Infocom 2007, and ACM MobiHoc 2007. He is a Fellow of the IEEE and an Associate Fellow of the AIAA.

Nov 28 – A dimensionality reduction approach for large-scale control problems

Speaker: Prof. Stefano Galelli, SUTD

Location and Time: FIT 1-222, Friday November 28, 10:30am-11:30am

Abstract: This talk introduces a dimensionality reduction approach for high-dimensional control problems, where the dimensionality of the state vector often prevents the application of standard control techniques. The approach relies on a variable (feature) selection method that, starting from a dataset of one-step state transitions and rewards, identifies which state, control, and disturbance variables are most relevant for control purposes, and reduces the problem dimensionality by removing the others. The approach is demonstrated on the dimensionality reduction of a 3D hydrodynamic model used to simulate the temperature and salinity conditions in Marina Reservoir (Singapore). Results show that the proposed approach identifies an accurate, yet parsimonious, low-order model that can be embedded within an optimal control scheme to support the operation of the reservoir.

About SUTD: The Singapore University of Technology and Design is Singapore's fourth autonomous university. It was established in collaboration with the Massachusetts Institute of Technology. SUTD's mission is to advance knowledge and nurture technically grounded leaders and innovators to serve societal needs. This is accomplished with a focus on design, through an integrated multi-disciplinary curriculum and multi-disciplinary research. The university is rapidly expanding its education and research footprint and is home to several world-class research centers, including the MIT-SUTD International Design Center and the Lee Kuan Yew Centre for Innovative Cities. The university will be moving to a new state-of-the-art campus in 2014.

Bio: Dr. Galelli is an assistant professor at the Singapore University of Technology and Design (SUTD), Pillar of Engineering Systems and Design. Dr. Galelli graduated in Environmental and Land Planning Engineering at Politecnico di Milano in 2007, and received a Ph.D. in Information and Communication Technology from the same university in early 2011. Before joining SUTD in mid-2013, he spent two years as a Post-Doctoral Research Fellow at the Singapore-Delft Water Alliance (National University of Singapore), where he led the Hydro-informatics group. He carries out research in systems analysis, data-driven modelling and optimization, particularly focusing on their application to water resources modelling and management. He is a member of the IFAC Technical Committee TC8.3 on Modelling and Control of Environmental Systems, and he serves as a reviewer for several international journals. He received the Environmental Modelling & Software 2011 Outstanding Reviewer Award and an Early Career Research Excellence Award (2014) from the international Environmental Modelling & Software society.

Nov 21 – Architecture of the Kubernetes Container Management System

Speaker: Tim Hockin, Google

Location and Time: MW S327, Friday November 21, 1:30pm-2:15pm

Abstract: Container management is a hot topic this year. Google's open-source Kubernetes system is inspired by Google’s experiences with internal management systems. These experiences have led us to make a number of decisions that influence the whole Kubernetes architecture.

Google has been managing its workloads as micro-services in containers for more than 10 years, and now runs more than 2 billion containers per week. By decoupling application management from infrastructure management, containers facilitate more flexible, efficient, and transparent management of applications throughout their whole lifecycle. Managing the deployment and maintenance of containers at scale requires a robust ecosystem of tools.

Kubernetes is a new open source project inspired by Google’s internal workload management systems that establishes patterns and primitives for managing applications composed of multiple containers across multiple hosts. The experiences that Google has accumulated have strongly influenced the design and architecture of Kubernetes.

From networking to active-management, Kubernetes is trying to advance the state of the art in container management. This talk will describe Kubernetes, its management primitives, and some of the design decisions that went into the system.

Bio: Tim Hockin is a Senior Staff Software Engineer at Google, where he works on containers, clusters, and related problems. He is one of the leads of the Kubernetes project.

Nov 21 – Containers Performance Management at Google and Beyond

Speaker: Victor Marmol, Google

Location and Time: MW S327, Friday November 21, 2:15pm-3:30pm

Abstract: Many companies are starting to deploy container-based applications at great scale throughout their infrastructures. As these deployments grow the need to measure and monitor the performance of the containers becomes clear. This then drives a need to optimize resource isolation and the management of machine resources. As applications grow, the needs for this management grow to groups of machines as well.

In the past year, Google has begun to open-source much of the knowledge and infrastructure it has built around containers. Starting with the release of lmctfy, Google has shown how it creates the container abstraction and how it envisions machines being isolated. cAdvisor builds on top of that to introduce the monitoring and management of machine resources. Growing past the size of a single machine, Heapster extends the monitoring and management of cAdvisor to groups of machines.

Bio: Victor is a Senior Software Engineer at Google. He is part of the containers infrastructure team, which runs all of Google's compute jobs across the world, starting over 2 billion containers per week. Recently, he has begun open sourcing some of Google's containers infrastructure through two projects: lmctfy and cAdvisor. He is also a core maintainer of Docker's libcontainer and an active contributor to Google's Kubernetes. He has a bachelor's degree in Computer Science and a master's degree in Software Engineering from Carnegie Mellon University.

September 1 – Oil/gas Industrial Applications of State-of-the-art Modeling & Analytics

Speaker: Dr. Feilong Liu, Chevron

Location and Time: FIT 1-222, Monday September 1, 2:00pm - 3:00pm

Abstract: The oil/gas industry is highly interdisciplinary, and state-of-the-art Modeling & Analytics techniques (including intelligent signal processing, pattern recognition, artificial intelligence, computational intelligence, machine learning, statistics, computer science, big data, data mining, control, visualization, and optimization) have been widely applied to improve work efficiency and help onsite engineers make better decisions.

In this talk, I will begin with a high-level overview of the oil industry and point out some differences in work efficiency between oil companies in China and in the US. I will then briefly introduce a couple of real high-value, high-impact business problems within the oil industry, which will hopefully give you some idea of what US oil companies are doing. Thereafter, I will devote most of the talk to a deep dive into waterflood optimization. As the name implies, waterflooding involves injecting water into an oil reservoir to drive the oil toward the production wells. Waterflooding currently accounts for about one third of world daily oil production (roughly 27 million barrels per day), so even a 1% improvement in waterflood oil production would be worth over $10 billion per year to the oil industry. This talk will share my experience applying signal processing to build the waterflood connectivity model and optimization techniques to set water allocation target rates.
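The $10 billion figure is easy to sanity-check. The snippet below redoes the arithmetic; the ~$100-per-barrel oil price is our assumption (roughly the price level at the time), not a number from the talk.

```python
# Back-of-the-envelope check of the $10B/year figure from the abstract.
daily_production_bbl = 27e6   # waterflood share of world production, per the abstract
improvement = 0.01            # a 1% production improvement
price_per_bbl = 100.0         # assumed oil price in USD, not from the talk

annual_value = daily_production_bbl * improvement * 365 * price_per_bbl
print(f"${annual_value / 1e9:.1f}B per year")  # prints "$9.9B per year"
```

At the assumed price, 1% of 27 million barrels per day works out to roughly $9.9B annually, consistent with the "over $10 billion" order of magnitude cited in the abstract.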

Hopefully, through this talk, you will gain some understanding of what oil companies are doing and how state-of-the-art Modeling & Analytics is applied in the oil industry to generate profit.

Bio: Dr. Feilong Liu received his B.S. degree from Northeastern University, Shenyang, China, in 1995, his M.S. degree from South China University of Technology, Guangzhou, China, in 2000, and his Ph.D. degree from the University of Southern California (USC), Los Angeles, in 2008, all in electrical engineering. Immediately after his Ph.D. graduation from USC, he joined Chevron, where he began to recognize many high-value, high-impact business problems within the oil industry. Since then, his professional interest has shifted from publishing theoretical research papers to developing practical key technologies for real industrial problems. Currently, he is a senior optimization engineer at Chevron and a recognized domain expert in Chevron's Modeling & Analytics community, where he actively leads the application of signal processing, pattern recognition, artificial intelligence, computational intelligence, statistics, and optimization techniques to improve work efficiency and help onsite engineers make better decisions.

Dr. Liu is a recognized researcher in the field of type-2 fuzzy logic and a rising star in smart oilfield technologies. He is a senior member of the IEEE and a member of the Society of Petroleum Engineers (SPE).

August 8 – Network Science and Linked Big Data

Speaker: Dr. Ching-Yung Lin, IBM T. J. Watson Research Center

Location and Time: FIT 1-222, Friday August 8, 10:30am - 11:30am

Abstract: In the Big Data era, data are linked and form large graphs. Traditional IT systems were designed for processing independent data, and analyses are mostly done under an i.i.d. assumption, so processing connected data has been a big challenge. From the scientific aspect, network science is emerging as a new interdisciplinary field. Entities – people, information, societies, nations, devices – connect to each other and form all kinds of intertwined networks. Researchers from multiple disciplines – electrical engineering, computer science, sociology, public health, economics, management, politics, law, arts, physics, math, etc. – are interacting with each other to build common ground for network science. In this short talk, I am going to introduce our R&D work in Graph Computing and Network Science Analytics. Graph Computing is the "tool" for Network Science: it is for storing, processing, analyzing, and visualizing connected data. On top of Graph Computing tools, we build Cognitive Networks (such as large-scale Bayesian networks, deep learning tools, and brain network analysis tools), Cognitive Analytics (such as visual sentiment and emotion analysis), Spatio-Temporal Analysis (for moving objects and IoT), and Behavioral Analysis. I will also quickly browse the solutions and applications built on these research foundations.
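To make "processing connected data" concrete, the toy sketch below stores linked records as a graph and computes node degree, one of the simplest network-science measures. It is a generic illustration in plain Python, not IBM System G or any specific graph-computing tool.

```python
# Toy sketch: treat linked data as a graph rather than independent rows.
# Build an undirected adjacency list from edges, then compute node degree.
from collections import defaultdict

edges = [("alice", "bob"), ("bob", "carol"), ("alice", "carol"), ("carol", "dave")]

adjacency = defaultdict(set)
for u, v in edges:          # undirected: store both directions
    adjacency[u].add(v)
    adjacency[v].add(u)

degree = {node: len(neighbors) for node, neighbors in adjacency.items()}
print(max(degree, key=degree.get))  # prints "carol" (degree 3)
```

Even this trivial measure is impossible under an i.i.d. view of the rows: the answer depends entirely on how the records connect to one another, which is the point of graph computing.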

Bio: Ching-Yung Lin is the Manager of the Network Science and Big Data Analytics Department at the IBM T. J. Watson Research Center. He is also an Adjunct Professor at Columbia University and New York University. His interests are mainly in fundamental research on large-scale multimodal signal understanding, network graph computing, and computational social & cognitive sciences, and in applied research on security, commerce, and collaboration. Since 2011, he has led a team of more than 40 Ph.D. researchers in IBM Research Labs worldwide and more than 20 professors and researchers at 9 universities (Northeastern, Northwestern, Columbia, Minnesota, Rutgers, CMU, New Mexico, USC, and UC Berkeley). He is currently the Principal Investigator of three major Big Data projects: DARPA Anomaly Detection at Multiple Scales (ADAMS), DARPA Social Media in Strategic Communications (SMISC), and the ARL Social and Cognitive Network Academic Research Center (SCNARC). He leads a major IBM R&D initiative on Linked Big Data called IBM System G. Dr. Lin was the first IEEE Fellow elected for contributions to Network Science.