Keynote

Keynote I:


Prof. Xuemin Lin
University of New South Wales

Title: "Big Graph Processing: Applications and Advances"

Bio:

Xuemin Lin is a UNSW Distinguished Professor (Scientia Professor) and head of the Database and Knowledge Research Group in the School of Computer Science and Engineering at UNSW. He is also a distinguished visiting Professor at Tsinghua University, a visiting Chair Professor at Fudan University, and a Fellow of the IEEE. Xuemin's research interests lie in databases, data mining, algorithms, and complexity. Specifically, he works on scalable processing and mining of large-scale data, including graph, spatio-temporal, streaming, text, and uncertain data. He currently serves as Editor-in-Chief of the IEEE Transactions on Knowledge and Data Engineering (January 2017 - present). He was an associate editor of ACM Transactions on Database Systems (2008-2014) and the IEEE Transactions on Knowledge and Data Engineering (February 2013 - January 2015), and an associate Editor-in-Chief of the IEEE Transactions on Knowledge and Data Engineering (2015-2016). He regularly serves as a PC member, area chair, or senior PC member for SIGMOD, VLDB, ICDE, ICDM, KDD, CIKM, and EDBT, and is a PC co-chair of ICDE 2019 and VLDB 2022.

Abstract:

Graph data are a key part of Big Data and are widely used for modelling complex structured data, with a broad spectrum of applications. Over the last decade, tremendous research effort has been devoted to many fundamental problems in managing and analysing graph data. In this talk, I will focus on three key problems: 1) efficiently computing subgraph mappings over large-scale graphs, 2) mining cohesive subgraphs, and 3) determining the resilience of graphs. I will cover applications and recent advances.


Keynote II:


Prof. Qian Zhang
Hong Kong University of Science and Technology

Title: "Beyond Communication: Intelligent and Secure IoT Sensing Driven Design"

Bio:

Dr. Zhang joined the Hong Kong University of Science and Technology (HKUST) in September 2005, where she is now Tencent Professor of Engineering and Chair Professor in the Department of Computer Science and Engineering. She also serves as co-director of the Huawei-HKUST Innovation Lab and director of HKUST's Digital Life Research Center. Before that, she was with Microsoft Research Asia, Beijing, from July 1999, where she was the research manager of the Wireless and Networking Group. Dr. Zhang has published more than 400 refereed papers in leading international journals and key conferences, and is the inventor of more than 50 granted and 20 pending international patents. Her current research interests include the Internet of Things (IoT), smart health, mobile computing and sensing, wireless networking, and cyber security. She is a Fellow of the IEEE. Dr. Zhang received the MIT TR100 (MIT Technology Review) world's top young innovator award, and the Best Asia-Pacific (AP) Young Researcher Award from the IEEE Communications Society in 2004. She received the Best Paper Award of the Multimedia Technical Committee (MMTC) of the IEEE Communications Society in 2005, and Best Paper Awards at QShine 2006, IEEE Globecom 2007, IEEE ICDCS 2008, IEEE ICC 2010, IEEE Globecom 2012, and IEEE ICC 2019. She received the Overseas Young Investigator Award from the National Natural Science Foundation of China (NSFC) in 2006, and held a Cheung Kong Chair Professorship (长江讲座教授) at Huazhong University of Science and Technology (2012-2015). Dr. Zhang serves as Editor-in-Chief of the IEEE Transactions on Mobile Computing (TMC) and is a member of the Steering Committee of IEEE INFOCOM. She received her B.S., M.S., and Ph.D. degrees in computer science from Wuhan University, China, in 1994, 1996, and 1999, respectively.

Abstract:

The IoT ecosystem consists of three parties: Internet-of-Things systems, the physical world, and adversaries. We believe that a well-functioning IoT system should satisfy three basic requirements. First, IoT devices should be able to properly interact with and intelligently sense the physical world. Second, IoT devices should be able to identify authentic IoT peers to enable device cooperation. Third, an IoT system should be robust against spoofing by adversaries. In this talk, I would like to share some of our recent efforts on intelligent and secure IoT sensing by examining the above requirements. In particular, I would like to share some of our work on acoustic-intensity-based motion tracking, RF-based cross-domain gesture recognition, authentication for on-body IoT devices leveraging RF propagation features, and the feasibility of spoofing wearable ECG monitoring systems. At the end of my talk, I would like to point out some interesting future directions.


Keynote III:


Prof. Ness B. Shroff
The Ohio State University

Title: "Delay Optimality in Load Balancing Systems"

Bio:

Ness Shroff received the Ph.D. degree in electrical engineering from Columbia University in 1994 and joined Purdue University immediately thereafter as an Assistant Professor in the School of Electrical and Computer Engineering. At Purdue, he became a Full Professor of ECE and, in 2004, director of a university-wide center on wireless systems and applications. In 2007, he joined The Ohio State University, where he holds the Ohio Eminent Scholar Endowed Chair in Networking and Communications in the Departments of ECE and CSE. He holds or has held visiting (chaired) professor positions at Tsinghua University, Beijing, China; Shanghai Jiao Tong University, Shanghai, China; and IIT Bombay, Mumbai, India. He has received numerous best paper awards for his research, was listed in Thomson Reuters' The World's Most Influential Scientific Minds, and was named a Highly Cited Researcher by Thomson Reuters in 2014 and 2015. He has served on numerous editorial boards and chaired various major conferences and workshops. He currently serves as steering committee chair of ACM MobiHoc and Editor-in-Chief of the IEEE/ACM Transactions on Networking. He received the IEEE INFOCOM Achievement Award for seminal contributions to scheduling and resource allocation in wireless networks.

Abstract:

We are in the midst of a major data revolution. The total data generated by humans from the dawn of civilization until the turn of the new millennium is now being generated every other day. Driven by a wide range of data-intensive devices and applications, this growth is expected to continue its astonishing march and fuel the development of new and larger data centers. In order to exploit the low-cost services offered by these resource-rich data centers, application developers are pushing computing and storage away from the end devices and deeper into the data centers. Hence, the end-users' experience now depends on the performance of the algorithms used for data retrieval and job scheduling within the data centers. In particular, providing low-latency services is critically important to the end-user experience for a wide variety of applications.

Our goal has been to develop the analytical foundations and methodologies to enable cloud storage and computing solutions that result in low-latency services. In this talk, I will focus on our efforts on reducing latency through load balancing in large-scale data center systems. In our model, each arrival is randomly dispatched to one of the servers with queue length below a threshold; if none exists, the arrival is randomly dispatched to one of the entire set of servers. We are interested in the fundamental relationship between the threshold and the delay performance of the system in heavy traffic. To this end, we first establish the following necessary condition to guarantee heavy-traffic delay optimality: the threshold must grow to infinity as the exogenous arrival rate approaches the boundary of the capacity region (i.e., as the load intensity approaches one), but its growth rate must be slower than a polynomial function of the mean number of tasks in the system. As a special case of this result, we directly show that the delay performance of the popular pull-based policy Join-Idle-Queue (JIQ) lies strictly between that of any heavy-traffic delay optimal policy and that of random routing. We further show that a sufficient condition for heavy-traffic delay optimality is that the threshold grows logarithmically with the mean number of tasks in the system. We then extend our methodology to multiple dispatchers and develop fully distributed strategies that are heavy-traffic delay optimal.
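The threshold-based dispatching rule described above can be sketched in a few lines of Python. The following is a minimal discrete-time toy simulation under our own assumptions (Bernoulli arrivals, one service completion per server per slot); the names `dispatch` and `simulate` are hypothetical and this is an illustration of the policy, not the model analyzed in the talk.

```python
import random

def dispatch(queues, threshold, rng):
    # Threshold policy from the abstract: route to a random server whose
    # queue length is below the threshold; if none exists, fall back to a
    # uniformly random server from the entire set.
    below = [i for i, q in enumerate(queues) if q < threshold]
    return rng.choice(below if below else range(len(queues)))

def simulate(n_servers=10, load=0.9, threshold=2, slots=20000, seed=1):
    # Toy discrete-time queueing loop (our assumption, for illustration):
    # Bernoulli arrivals with per-server intensity `load`; each server
    # completes at most one task per slot.
    rng = random.Random(seed)
    queues = [0] * n_servers
    total = 0
    for _ in range(slots):
        for _ in range(n_servers):
            if rng.random() < load:
                queues[dispatch(queues, threshold, rng)] += 1
        for i in range(n_servers):
            if queues[i] > 0:
                queues[i] -= 1
        total += sum(queues)
    return total / slots  # time-averaged number of tasks in the system
```

Note that with `threshold=1` only idle servers are eligible, which mimics a JIQ-style policy, while a very large threshold degenerates to purely random routing; comparing the time-averaged backlog across these settings gives an empirical feel for the ordering the abstract establishes analytically.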


Keynote IV:


Prof. Zhi Tian
George Mason University

Title: "Communication-Efficient Distributed Learning"

Bio:

Prof. Zhi Tian has been a Professor in the Electrical and Computer Engineering Department of George Mason University since 2015. Prior to that, she was on the faculty of Michigan Technological University from 2000 to 2014, and served as a Program Director at the National Science Foundation from 2012 to 2014. Her research interests lie in statistical signal processing, wireless communications, and decentralized network optimization and machine learning. She is an IEEE Fellow. She is Member-at-Large of the IEEE Signal Processing Society Board of Governors. She was General Co-Chair of the IEEE GlobalSIP Conference in 2016. She served as an IEEE Distinguished Lecturer, and Associate Editor for the IEEE Transactions on Wireless Communications and IEEE Transactions on Signal Processing. She received the IEEE Communications Society TCCN Publication Award in 2018.

Abstract:

In distributed learning, a number of nodes collaboratively carry out a common learning task in an autonomous manner, without sharing their private local raw data and often in the absence of centralized task coordination. In such a big-data paradigm, communication has become a common bottleneck in implementing efficient parallel and distributed algorithms, due to the high latency and limited bandwidth of distributed networks. An ideal distributed algorithm is expected to reach the optimal solution with minimal communication and computation costs; nevertheless, the communication-computation tradeoff is essential. This talk presents some recent results on the design and analysis of communication-efficient schemes for distributed learning, with the overarching strategy of transmitting only the most informative messages during the iterative learning process. These communication-saving strategies are illustrated via several optimization and learning problems with broad applications.
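One generic instance of "transmitting only the most informative messages" is top-k gradient sparsification, where each node sends only its largest-magnitude gradient coordinates each round. The sketch below illustrates that idea in plain Python; the names (`top_k_sparsify`, `aggregate`, `step`) are hypothetical, and this is a textbook technique used for illustration, not necessarily the specific scheme presented in the talk.

```python
def top_k_sparsify(grad, k):
    # Keep the k largest-magnitude coordinates and zero out the rest,
    # so each node transmits only k (index, value) pairs per round.
    keep = set(sorted(range(len(grad)), key=lambda i: abs(grad[i]),
                      reverse=True)[:k])
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

def aggregate(sparse_grads):
    # Server-side averaging of the sparsified updates from all nodes.
    n = len(sparse_grads)
    return [sum(col) / n for col in zip(*sparse_grads)]

def step(params, node_grads, k, lr=0.1):
    # One communication-efficient round: each node sparsifies its local
    # gradient, the server averages them, then takes a descent step.
    avg = aggregate([top_k_sparsify(g, k) for g in node_grads])
    return [p - lr * g for p, g in zip(params, avg)]
```

Per round, each node's traffic drops from the full model dimension to k values; the discarded coordinates introduce a bias, which in practice is usually compensated by accumulating them locally as a residual and adding them back in later rounds.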