Inaugural MPI-Beyond Workshop
Hadia Ahmed is a postdoctoral fellow at Lawrence Berkeley National Lab. She received her PhD in Computer Science from the University of Alabama at Birmingham in August 2017 and then joined Dr. Scott Baden’s group at LBL, where she works on automated translations for generating PGAS applications. Hadia’s research focuses on program analysis and on using source-to-source translation to improve the performance of parallel applications on supercomputers.
George Bosilca is a Research Director and Adjunct Assistant Professor at the Innovative Computing Laboratory at the University of Tennessee, Knoxville. His research interests revolve around parallel and distributed algorithms and systems, and around designing support for parallel applications to maximize their efficiency, portability, scalability, heterogeneity, and resiliency at any scale and in any setting. He is currently actively involved in the development and maintenance of projects such as Open MPI, ULFM, PaRSEC, DPLASMA, and TESSE.
Ron Brightwell leads the Scalable System Software Department at Sandia National Laboratories. After joining Sandia in 1995, he was a key contributor to the high-performance interconnect software and lightweight operating system for the world’s first terascale system, the Intel ASCI Red machine. He was also part of the team responsible for the high-performance interconnect and lightweight operating system for the Cray Red Storm machine, which was the prototype for Cray’s successful XT product line. The impact of his interconnect research is visible in technologies available today from Atos/Bull, Intel, and Mellanox. He has also contributed to the development of the MPI-2 and MPI-3 specifications. He has authored more than 115 peer-reviewed journal, conference, and workshop publications. He is an Associate Editor for the IEEE Transactions on Parallel and Distributed Systems, has served on the technical program and organizing committees for numerous high-performance and parallel computing conferences, and is a Senior Member of the IEEE and the ACM.
Dr. Daniel Holmes is an Applications Consultant in HPC Research at EPCC, the Supercomputing Centre at the University of Edinburgh, Scotland. Prior to completing his Ph.D. in 2012, Daniel worked as an IT professional and expert consultant in a wide variety of companies and industries. Daniel specialises in improving the performance, usability, and scalability of parallel programming, with a particular focus on MPI and its interactions with other programming models. His interests include programming models for extreme parallelism and programming systems for increased programmer productivity. Currently, Daniel works on the EU Horizon 2020 funded EPiGRAM-HS project and is an active member of the MPI Forum, including serving as a working group leader and chapter committee chair.
Atsushi Hori, Ph.D., is a Senior Scientist in the System Software Research Team at the RIKEN Center for Computational Science. His current research interests include parallel operating systems. He received B.S. and M.S. degrees from Waseda University and a Ph.D. in Engineering from the University of Tokyo.
Martin Herbordt is Professor of Electrical and Computer Engineering at Boston University, where he directs the Computer Architecture and Automated Design Lab. His research spans computer architecture and high-performance computing. He and his group have worked for many years on accelerating HPC applications with FPGAs and GPUs and on building systems that integrate FPGAs. More recently, their focus has been on middleware and system aspects of large-scale FPGA clusters and clouds, the latter especially in bump-in-the-wire configurations.
Ken Raffenetti is a principal software development specialist at Argonne National Laboratory in the Programming Models and Runtime Systems (PMRS) group. He has been a main contributor to MPICH for over five years. Ken received a BS in Computer Science from the University of Illinois at Urbana-Champaign. Prior to joining PMRS, he was a Systems Administration Associate in the Mathematics and Computer Science division at Argonne.
Dr. Martin Schulz is a Full Professor and Chair for Computer Architecture and Parallel Systems at the Technische Universität München (TUM), which he joined in 2017. Prior to that, he held positions at the Center for Applied Scientific Computing (CASC) at Lawrence Livermore National Laboratory (LLNL) and Cornell University. He earned his Doctorate in Computer Science in 2001 from TUM and a Master of Science in Computer Science from UIUC. Martin has published over 200 peer-reviewed papers and currently serves as the chair of the MPI Forum, the standardization body for the Message Passing Interface. His research interests include parallel and distributed architectures and applications; performance monitoring, modeling and analysis; memory system optimization; parallel programming paradigms; tool support for parallel programming; power-aware parallel computing; and fault tolerance at the application and system level. Martin was a recipient of the IEEE/ACM Gordon Bell Award in 2006 and an R&D 100 award in 2011.
Dr. Anthony (Tony) Skjellum studied at Caltech (BS, MS, PhD). His PhD work emphasized portable, parallel software for large-scale dynamic simulation, with a specific emphasis on message-passing systems, parallel nonlinear and linear solvers, and massive parallelism. From 1990 to 1993, he was a computer scientist at LLNL focusing on performance-portable message passing and portable parallel math libraries. From 1993 to 2003, he was on the Computer Science faculty at Mississippi State University, where his group co-invented the MPICH implementation of the Message Passing Interface (MPI) together with colleagues at Argonne National Laboratory. From 2003 to 2013, he was professor and chair of the Department of Computer and Information Sciences at the University of Alabama at Birmingham. In 2014, he joined Auburn University as Lead Cyber Scientist and led R&D in cyber and high-performance computing for over three years. In the summer of 2017, he joined the University of Tennessee at Chattanooga as Professor of Computer Science, Chair of Excellence, and Director of the SimCenter, where he continues work in HPC and cybersecurity, with strong emphases on IoT and blockchain technologies. In particular, he is leading work with collaborators and his students on two fault-tolerant MPI implementations and models (which are not the same as ULFM). He is a senior member of ACM, IEEE, ASEE, and AIChE, and an Associate Member of the American Academy of Forensic Sciences (AAFS), Digital & Multimedia Sciences Division.
Dr. Jeff Squyres is Cisco's representative to the MPI Forum standards body and is Cisco's core software developer in the open source Open MPI project. He has worked in the High Performance Computing (HPC) field since his early graduate-student days in the mid-1990s, and is a chapter author of the MPI-2 and MPI-3 standards.
Jeff received both a BS in Computer Engineering and a BA in English Literature from the University of Notre Dame in 1994; he received an MS in Computer Science and Engineering from Notre Dame two years later, in 1996. After several active-duty tours in the military, Jeff received his Ph.D. in Computer Science and Engineering from Notre Dame in 2004. Jeff then worked as a post-doctoral research associate at Indiana University until he joined Cisco in 2006.
At Cisco, Jeff is part of the VIC group (Virtual Interface Card, Cisco's virtualized server NIC) within the larger UCS server group. He designs and writes systems-level software for optimized network I/O in HPC and other high-performance applications. Jeff also represents Cisco in several open source software communities and at the MPI Forum standards body.
Dr. Michela Taufer is an ACM Distinguished Scientist and holds the Jack Dongarra Professorship in High Performance Computing in the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville (UTK). She earned her undergraduate degrees in computer engineering from the University of Padova (Italy) and her doctoral degree in computer science from the Swiss Federal Institute of Technology (ETH) in Switzerland. From 2003 to 2004, she was a La Jolla Interfaces in Science Training Program (LJIS) Postdoctoral Fellow at the University of California, San Diego (UCSD) and The Scripps Research Institute (TSRI), where she worked on interdisciplinary projects in computer systems and computational chemistry.
Michela has a long history of interdisciplinary work with scientists. Her research interests include software applications and their advanced programmability in heterogeneous computing (i.e., multi-core platforms and GPUs); cloud computing and volunteer computing; and performance analysis, modeling, and optimization of multi-scale applications. She has served as the principal investigator of several NSF collaborative projects. She also has significant experience in mentoring a diverse population of students in interdisciplinary research. Michela's training expertise includes efforts to broaden participation in high-performance computing in undergraduate education and research, as well as efforts to increase the interest and participation of diverse populations in interdisciplinary studies.
Dr. Geoffroy Vallee leads the computer science team of the Research Software Engineering group in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL). Dr. Vallee also represents ORNL at the MPI and PMIx forums and serves on the ORNL Science Council. He is part of the team supporting MPI on the Summit system, as well as a member of the ECP team focused on preparing Open MPI for exascale.
In his work, he addresses challenges such as application optimization on large-scale systems, MPI user support, improving support for hybrid applications (MPI+X), research on programming models such as MPI (e.g., what features are missing for current and future systems?) and OpenMP (e.g., the rOpenMP project, which aims to provide resilience for OpenMP), and investigating the benefits of using containers and distributed runtime systems for executing task-based workloads (as opposed to pure MPI applications).