Keynote Speakers


Dr. Carla E. Brodley
Professor and Dean,
College of Computer and Information Science,
Northeastern University, USA


Dr. Alan Edelman
Professor,
MIT's Computer Science and AI Lab,
Massachusetts Institute of Technology, USA


Dr. Satoshi Matsuoka
Professor,
Global Scientific Information and Computing Center,
Tokyo Institute of Technology, Japan


Dr. ChengXiang Zhai
Professor and Willett Faculty Scholar,
Department of Computer Science,
University of Illinois at Urbana-Champaign, USA


Dr. Jure Leskovec
Associate Professor of Computer Science,
Stanford University,
Chief Scientist at Pinterest, USA


Dr. John Langford
Principal Researcher,
Microsoft Research New York, USA

Human-in-the-loop Applied Machine Learning

Dr. Carla E. Brodley, Professor and Dean, Northeastern University, USA

Abstract: Machine learning research in academia is often conducted in vitro, divorced from motivating practical applications. As a result, researchers often lose the ability to ask the question: how can my human expert's knowledge be used to best improve the machine learning outcome? In this talk, we present three motivating applications that all benefit from human-guided machine learning: systematic reviews for evidence-based medicine, generating maps of global land cover of the Earth from remotely sensed data, and finding lesions in the MRIs of treatment-resistant epilepsy patients. Our machine learning contributions span active learning, both supervised and unsupervised learning, and their combination with human input. The methods we created are applicable to a wide range of applications in science, medicine, and business.
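
As a rough illustration of the human-in-the-loop interaction described above, the sketch below runs a generic uncertainty-sampling active-learning loop: the model is retrained as an "expert" supplies labels for the examples it is least certain about. This is not Professor Brodley's method; the synthetic dataset, the logistic-regression learner, and the query budget are assumptions made for the example.

```python
# Minimal uncertainty-sampling active-learning loop (illustrative sketch only;
# the dataset, model, and query budget are assumptions, not the speaker's method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Seed the labeled pool with a few examples from each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                                # budget of 20 "expert" queries
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[unlabeled])
    uncertainty = 1.0 - probs.max(axis=1)          # least-confident sampling
    query = unlabeled[int(np.argmax(uncertainty))]
    labeled.append(query)                          # in practice a human supplies y[query]
    unlabeled.remove(query)

print("accuracy after querying:", model.score(X, y))
```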

Carla E. Brodley is the Dean of the College of Computer and Information Science at Northeastern University. Prior to joining Northeastern, she was a professor in the Department of Computer Science and the Clinical and Translational Science Institute at Tufts University (2004-2014). Before joining Tufts, she was on the faculty of the School of Electrical Engineering at Purdue University (1994-2004).
A Fellow of the ACM and AAAI, Dean Brodley’s interdisciplinary machine learning research led to advances not only in computer and information science, but in many other areas including remote sensing, neuroscience, digital libraries, astrophysics, content-based image retrieval of medical images, computational biology, chemistry, evidence-based medicine, and predictive medicine.
Dean Brodley has held numerous leadership positions in computer science and in her chosen research fields of machine learning and data mining, including serving as program co-chair of ICML, co-chair of AAAI, and associate editor of the Journal of AI Research and the Journal of Machine Learning Research. She has previously served on the Defense Science Study Group, the board of the International Machine Learning Society, the AAAI Council, and DARPA's Information Science and Technology (ISAT) Board. She is currently serving on the CRA Board of Directors, the executive committee of the Northeast Big Data Hub, and as a member-at-large of the Section on Information, Computing, and Communication of AAAS.

A More Open Efficient Future for AI Development and Data Science with an Introduction to Julia

Dr. Alan Edelman, Professor of Applied Mathematics, MIT, USA

Abstract: We propose a more open, efficient, expressive, and ergonomic future for AI development, machine learning, and data science based on the Julia programming language. Our thesis is that the current tapestry of high-level code with library calls creates programmer indirections that can work well for the "one-off", but can slow general progress. We provide examples from Machine Learning, Automatic Differentiation, and Data Handling Technologies.
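
As a small, hedged illustration of one topic the talk names, the sketch below implements forward-mode automatic differentiation with dual numbers. It is a textbook-style toy written in Python (not Julia), not material from the talk itself.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Illustrative toy only; not code from the talk, and in Python rather than Julia.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u v' + u' v
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f(x) and df/dx at x in a single forward pass."""
    out = f(Dual(x, 1.0))
    return out.value, out.deriv

# d/dx (3x^2 + 2x) at x = 4 is 6*4 + 2 = 26.
print(derivative(lambda x: 3 * x * x + 2 * x, 4.0))  # -> (56.0, 26.0)
```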

Alan Edelman is Professor of Applied Mathematics, and member of MIT's Computer Science and AI Lab. He has received many prizes for his work on mathematics and computing, and is a founder of Interactive Supercomputing, Inc. and Julia Computing, Inc. He received the B.S. and M.S. degrees in mathematics from Yale in 1984, and the Ph.D. in applied mathematics from MIT in 1989 under the direction of Lloyd N. Trefethen. Edelman's research interests include Julia, high-performance computing, numerical computation, linear algebra and random matrix theory. He has consulted for Akamai, IBM, Pixar, and NKK Japan among other corporations.

Being "BYTES-oriented" in HPC leads to an Open Big Data/AI Ecosystem and Further Advances into the Post-Moore Era

Dr. Satoshi Matsuoka, Professor, Tokyo Institute of Technology, Japan

Abstract: With the rapid rise of Big Data and AI as a new breed of high-performance workloads on supercomputers, we need to accommodate them at scale, and traditional simulation-based HPC and BD/AI will converge. Our TSUBAME3 supercomputer at Tokyo Institute of Technology came online in Aug. 2017 and became the greenest supercomputer in the world on the Green 500 ranking at 14.11 GFlops/W. The other aspect of TSUBAME3 is that it embodies various data- or "BYTES-oriented" features to allow for HPC-to-BD/AI convergence at scale, including significant scalable horizontal bandwidth as well as support for deep memory hierarchy and capacity, along with high flops in low-precision arithmetic for deep learning. Furthermore, TSUBAME3's technologies will be commoditized to construct one of the world's largest BD/AI-focused and "open-source" cloud infrastructures, called ABCI (AI Bridging Cloud Infrastructure), hosted by AIST-AIRC (AI Research Center), the largest publicly funded AI research center in Japan. The performance of the machine is slated to be several hundred AI-Petaflops for machine learning; the true nature of the machine, however, is its BYTES-oriented optimization and acceleration in the memory hierarchy, I/O, the interconnect, etc., for high-performance BD/AI. ABCI will be online in Spring 2018, and its architecture, software, and datacenter infrastructure design will be made open to drive rapid adoption and improvement by the community, unlike the concealed cloud infrastructures of today. Finally, transcending the FLOPS-centric mindset to become BYTES-oriented will be one of the key solutions to the upcoming "end of Moore's law" in the mid-2020s, upon which FLOPS increases will cease and BYTES-oriented advances will be the new source of performance increases over time for computing in general.

Satoshi Matsuoka has been a Full Professor at the Global Scientific Information and Computing Center (GSIC), a Japanese national supercomputing center hosted by the Tokyo Institute of Technology, and since 2016 a Fellow at the AI Research Center (AIRC), AIST, the largest national lab in Japan; in 2017 he also became head of RWBC-OIL (Open Innovation Lab on Real World Big Data Computing), a joint lab between the two institutions. He received his Ph.D. from the University of Tokyo in 1993. He is the leader of the TSUBAME series of supercomputers, including TSUBAME2.0, which was the first supercomputer in Japan to exceed Petaflop performance and became the 4th fastest in the world on the Top500 in Nov. 2010; TSUBAME-KFC, which became #1 in the world for power efficiency on both the Green 500 and Green Graph 500 lists in Nov. 2013; and most recently TSUBAME3, No. 1 on the Green500. He is also currently leading several major supercomputing research projects, such as the MEXT Green Supercomputing and JST-CREST Extreme Big Data projects, and is a Co-PI on several other HPC and BD/AI convergence projects. He has written over 500 articles according to Google Scholar and chaired numerous ACM/IEEE conferences, most recently serving as the overall Technical Program Chair of the ACM/IEEE Supercomputing Conference (SC13) in 2013. He is a Fellow of the ACM and the European ISC and has won many awards, including the JSPS Prize from the Japan Society for the Promotion of Science in 2006, awarded by His Highness Prince Akishino; the ACM Gordon Bell Prize in 2011; the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology in 2012; and the 2014 IEEE-CS Sidney Fernbach Memorial Award, one of the most prestigious awards in the field of HPC.

TextScope: Enhance Human Perception via Text Mining

Dr. ChengXiang Zhai, Professor, University of Illinois at Urbana-Champaign, USA

Abstract: Recent years have seen a dramatic growth of natural language text data (e.g., web pages, news articles, scientific literature, emails, enterprise documents, blog articles, forum posts, product reviews, and tweets). Text data contain all kinds of knowledge about the world and human opinions and preferences, thus offering great opportunities for analyzing and mining vast amounts of text data ("big text data") to support user tasks and optimize decision making in all application domains. However, computers cannot yet accurately understand unrestricted natural language; as such, involving humans in the loop of interactive text mining is essential. In this talk, I will present the vision of TextScope, an interactive software tool to enable users to perform intelligent information retrieval and text analysis in a unified task-support framework. Just as a microscope allows us to see things in the "micro world," and a telescope allows us to see things far away, the envisioned TextScope would allow us to "see" useful hidden knowledge buried in large amounts of text data that would otherwise be unknown to us. As examples of techniques that can be used to build a TextScope, I will present some general statistical text mining algorithms that we have recently developed for joint analysis of text and non-text data to discover interesting patterns and knowledge. I will conclude the talk with a discussion of the major challenges in developing a TextScope and some important directions for future research in text data mining.
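
As a purely illustrative example of the kind of joint analysis of text and non-text data mentioned above, the toy sketch below contrasts word usage in reviews with high versus low numeric ratings. The data and the scoring heuristic are assumptions invented for the example; they are not the algorithms referred to in the abstract.

```python
# Toy joint analysis of text and non-text data: contrast word usage in
# high-rated vs. low-rated reviews. Illustrative only; the data and the
# scoring heuristic are assumptions, not the algorithms from the talk.
from collections import Counter

reviews = [
    ("battery life is great and the screen is sharp", 5),
    ("great camera, fast and reliable", 5),
    ("battery died quickly, very disappointing", 1),
    ("screen cracked and support was slow", 2),
]

high, low = Counter(), Counter()
for text, rating in reviews:
    target = high if rating >= 4 else low   # the non-text signal splits the text
    target.update(text.split())

vocab = set(high) | set(low)

def rel(counts, word):                      # add-one smoothed relative frequency
    return (counts[word] + 1) / (sum(counts.values()) + len(vocab))

# Rank words by how much more frequent they are in high-rated reviews.
scored = sorted(vocab, key=lambda w: rel(high, w) / rel(low, w), reverse=True)
print("most high-rating-associated words:", scored[:5])
print("most low-rating-associated words:", scored[-5:])
```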

ChengXiang Zhai is a Professor of Computer Science and a Willett Faculty Scholar at the University of Illinois at Urbana-Champaign (UIUC), where he is also affiliated with the School of Information Sciences, the Carl R. Woese Institute for Genomic Biology, and the Department of Statistics. He received a Ph.D. in Computer Science from Nanjing University in 1990 and a Ph.D. in Language and Information Technologies from Carnegie Mellon University in 2002. He worked at Clairvoyance Corp. as a Research Scientist and a Senior Research Scientist from 1997 to 2000. His research interests are in the general area of intelligent information systems, including specifically intelligent information retrieval, data mining, natural language processing, machine learning, and their applications. He has published over 200 papers in these areas and a textbook on text data management and analysis. He is the America Editor of Springer's Information Retrieval Book Series and an Associate Editor of BMC Medical Informatics and Decision Making, and previously served as an Associate Editor of ACM Transactions on Information Systems, an Associate Editor of Elsevier's Information Processing and Management, and Program Co-Chair of NAACL HLT 2007, ACM SIGIR 2009, and WWW 2015. He is an ACM Distinguished Scientist and has received a number of awards, such as the ACM SIGIR Test of Time Paper Award (three times), the Presidential Early Career Award for Scientists and Engineers (PECASE), the Alfred P. Sloan Research Fellowship, an IBM Faculty Award, an HP Innovation Research Award, the UIUC Rose Award for Teaching Excellence, and the UIUC Campus Award for Excellence in Graduate Student Mentoring.

Large-scale Graph Representation Learning

Dr. Jure Leskovec, Associate Professor, Stanford University, Chief Scientist at Pinterest, USA

Abstract: Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches have relied on user-defined heuristics to extract features encoding structural information about a graph. In this talk, I will discuss methods that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. I will provide a conceptual review of key advancements in this area of representation learning on graphs, including random-walk based algorithms and graph convolutional networks.
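
For readers unfamiliar with the methods named above, here is a minimal sketch of a single graph-convolution layer using the commonly used symmetric-normalization propagation rule. The toy graph, feature matrix, and weights are made up for illustration; this is a generic GCN layer, not the specific models covered in the talk.

```python
# Minimal single graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).
# Illustrative sketch of the standard GCN propagation rule; the toy graph,
# features, and weights are made-up examples.
import numpy as np

rng = np.random.default_rng(0)

# Adjacency matrix of a small 4-node path graph (edges 0-1, 1-2, 2-3).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))          # node features (4 nodes, 3 features each)
W = rng.normal(size=(3, 2))          # layer weights (3 -> 2 dimensions)

A_hat = A + np.eye(len(A))           # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]  # D^-1/2 (A+I) D^-1/2

H_next = np.maximum(A_norm @ H @ W, 0.0)   # aggregate neighbors, then ReLU
print(H_next)                              # low-dimensional node embeddings
```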

Jure Leskovec is Associate Professor of Computer Science at Stanford University and Chief Scientist at Pinterest. Computation over massive data is at the heart of his research and has applications in computer science, social sciences, economics, marketing, and healthcare. This research has won several awards, including the Lagrange Prize, a Microsoft Research Faculty Fellowship, an Alfred P. Sloan Fellowship, and numerous best paper awards. Leskovec received his bachelor's degree in computer science from the University of Ljubljana, Slovenia, received his PhD in machine learning from Carnegie Mellon University, and completed postdoctoral training at Cornell University.

Contextual Reinforcement Learning

Dr. John Langford, Principal Researcher, Microsoft Research New York, USA

Abstract: I will discuss a decade-long research project to create the foundations of reinforcement learning with context (aka features). This research project has multiple threads, including Contextual Bandits, Learning to Search, and Contextual Decision Processes. The most mature of these (Contextual Bandits) is now driving many real-world RL applications, while the least mature (CDPs) is a fascinating theoretician's toy.
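
As a rough illustration of what a contextual bandit algorithm does, the sketch below runs epsilon-greedy with a per-action linear reward model on simulated data. It is a generic toy, not the specific algorithms developed in this research program (or those shipped in Vowpal Wabbit); the simulation setup and hyperparameters are assumptions.

```python
# Epsilon-greedy contextual bandit with a ridge-regression reward model per
# action. Illustrative toy on simulated data, not the talk's algorithms.
import numpy as np

rng = np.random.default_rng(0)
n_actions, dim, epsilon = 3, 5, 0.1
true_w = rng.normal(size=(n_actions, dim))     # hidden reward parameters

# Per-action ridge statistics: A_k = X_k^T X_k + I, b_k = X_k^T r_k.
A = np.stack([np.eye(dim) for _ in range(n_actions)])
b = np.zeros((n_actions, dim))

total_reward = 0.0
for t in range(2000):
    x = rng.normal(size=dim)                   # observed context features
    if rng.random() < epsilon:                 # explore
        a = int(rng.integers(n_actions))
    else:                                      # exploit current estimates
        w_hat = np.stack([np.linalg.solve(A[k], b[k]) for k in range(n_actions)])
        a = int(np.argmax(w_hat @ x))
    r = true_w[a] @ x + 0.1 * rng.normal()     # bandit feedback: chosen arm only
    A[a] += np.outer(x, x)                     # update only the chosen arm's model
    b[a] += r * x
    total_reward += r

print("average reward:", total_reward / 2000)
```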

John Langford is a machine learning research scientist, a field which he says "is shifting from an academic discipline to an industrial tool". He is the author of the weblog hunch.net and the principal developer of Vowpal Wabbit. John works at Microsoft Research New York, of which he was one of the founding members, and was previously affiliated with Yahoo! Research, Toyota Technological Institute at Chicago, and IBM's Watson Research Center. He studied Physics and Computer Science at the California Institute of Technology, earning a double bachelor's degree in 1997, and received his Ph.D. in Computer Science from Carnegie Mellon University in 2002. He was the program co-chair for the 2012 International Conference on Machine Learning.