Credit: Google News
Data science and machine learning are expanding the boundaries of higher education. From classrooms to labs, professors, students, and researchers are using artificial intelligence to accelerate research in new ways.
NVIDIA’s GPU Technology Conference (GTC), the premier AI and deep learning conference, features the latest scientific discoveries and opportunities to gain hands-on experience with powerful new research platform tools.
Whether you are a GTC veteran or a first-time attendee, you’ll have the chance to engage first-hand with the higher education and research community through meetups, round tables, and mixers, networking with peers and NVIDIANs.
Hold the date for Higher Education breakouts at GTC, March 18–21, San Jose, CA.
To give you a preview of the innovative content available at this event, we’ve selected the top 10 sessions, in no particular order, led by academics and industry experts from leading universities around the world. Sign up for the higher education and research newsletter to stay informed about GTC and more!
High Performance Distributed Deep Learning: A Beginner’s Guide
Catch an overview of interesting trends in DL frameworks from an architectural performance standpoint and discover exciting opportunities for HPC and AI researchers to improve DNN training on NVIDIA GPUs.
Session Speakers: Dhabaleswar K (DK) Panda – Professor and University Distinguished Scholar, The Ohio State University; Hari Subramoni – Research Scientist, The Ohio State University; Ammar Ahmad Awan – PhD Student, The Ohio State University
Bringing State-of-the-Art GPU-Accelerated Molecular Modeling Tools to the Research Community
Researchers achieved their latest success with GPU acceleration of molecular simulation analysis tasks on the latest Volta and Turing GPUs using OpenACC parallel programming directives. Hear from researchers how GPUs accelerated machine-learning algorithms for tasks such as clustering of structures resulting from molecular dynamics simulations.
Session Speaker: John Stone – Senior Research Programmer, University of Illinois at Urbana-Champaign
Building a Distributed GPU DataFrame with Python
This session covers the GPU Open Analytics Initiative, an effort to develop a GPU DataFrame that can handle large-scale data-analytics workflows and support out-of-core cases in which the data is larger than GPU memory. Learn how to build custom distributed GPU computations by composing single-GPU libraries.
Session Speaker: Siu Kwan Lam – Software Engineer, Anaconda
AI + VR: The Future of Data Analytics
Standard data analytics tools and techniques are no longer sufficient. Learn how AI-powered visual analytics with immersive environments provides a novel and robust framework for collaborative data exploration and understanding.
Session Speakers: Ciro Donalek – CTO, Co-Founder, Virtualitics Inc; Aakash Indurkhya – Head of Machine Learning Projects, Virtualitics
Accelerating the Next Generation of Seismic Interpretation
Researchers at The University of Texas are improving automatic seismic geobody interpretation. Understand their use of convolutional neural networks for image classification and segmentation.
Session Speaker: Yunzhi Shi – Graduate Research Assistant, The University of Texas at Austin
AI in Astrophysics: Applying Artificial Intelligence and Deep Learning to Astronomical Research
Astronomers are adopting AI in astrophysics through new applications, including data analytics and numerical simulation. Delve into how a deep learning framework allows astronomers to identify and categorize astronomical objects in enormous datasets with increased accuracy.
Session Speaker: Brant Robertson – Associate Professor, UC Santa Cruz
Neural Networks Designing New Drugs: The Rise of the Machines
New research in AI offers an opportunity to transform the pharmaceutical industry and dramatically accelerate the design of new drug candidates. Explore the unique proposition of AI’s ability to learn directly from past experience and capture hidden dependencies from both structured and unstructured data.
Session Speakers: Olexandr Isayev – Assistant Professor, University of North Carolina; Daniel Wakeland – Project Manager, Cvent
Deep Learning for Robotics
Deep learning is decreasing the need for time-consuming, task-specific programming for robots. This talk will look at various techniques for training robots, including deep reinforcement learning, apprenticeship learning, and meta-learning for action.
Session Speaker: Pieter Abbeel – Professor, UC Berkeley / OpenAI / Gradescope
Demystifying Deep Learning Infrastructure Choices Using MLPerf Benchmark Suite
Dive into the new benchmark suite proposed by the deep learning community for machine learning workloads. Through quantitative analysis, understand the performance impact of NVIDIA GPUs along with different architectures and systems.
Session Speakers: Ramesh Radhakrishnan – Distinguished Engineer, Dell EMC; Lizy John – B. N. Gafford Professor, University of Texas
Exascale Deep Learning for Climate Analytics
Understand the true meaning of exascale through this session in which researchers share how they scaled training of a single deep learning model to 27,360 V100 GPUs with TensorFlow. Learn the importance of scale as data and models grow along with the necessary optimizations required to run at scale.
Session Speakers: Thorsten Kurth – Application Performance Specialist, Lawrence Berkeley National Laboratory; Josh Romero – Developer Technology Engineer, NVIDIA
A GPU-Accelerated Streaming AI Data Platform Leveraging RAPIDS
Learn about the evolution of GPU-accelerated data platforms for big data streaming and ETL analytics, with an overview of the RAPIDS data science platform. This session covers benchmarks and best practices for running end-to-end big data workloads on GPUs, drawn from real-life big data challenges.
Session Speaker: Joshua Patterson – Director, Applied Solution Engineering, NVIDIA
Go Hands-On with Instructor-Led Workshops
In addition to all of the technical sessions, the NVIDIA Deep Learning Institute (DLI) at GTC offers a perfect opportunity to get hands-on experience with AI and accelerated computing. Here are some recommended workshops for educators:
Accelerating Data Science Workflows with RAPIDS
RAPIDS, an open source GPU acceleration platform for data science, creates possibilities for drastic performance gains and techniques not available through traditional CPU-only workflows. In this workshop, you’ll be able to refactor existing CPU-only data science workloads to run much faster on GPUs and write accelerated data science workflows from scratch.
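The refactoring pattern this workshop teaches is often close to a drop-in swap, because cuDF mirrors much of the pandas API. A minimal sketch of the idea (the cuDF lines are left as comments since they assume a RAPIDS install and a CUDA-capable GPU):

```python
import pandas as pd

# CPU-only baseline with pandas
df = pd.DataFrame({"key": ["a", "b", "a", "c"], "val": [1, 2, 3, 4]})
cpu_result = df.groupby("key")["val"].sum()

# The GPU refactor with RAPIDS cuDF is typically a near one-line change,
# because cuDF mirrors the pandas API (requires cudf + a CUDA GPU):
#   import cudf
#   gdf = cudf.DataFrame({"key": ["a", "b", "a", "c"], "val": [1, 2, 3, 4]})
#   gpu_result = gdf.groupby("key")["val"].sum()

print(cpu_result.to_dict())
```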
Deep Learning for Robotics
AI is revolutionizing the acceleration and development of robotics across a broad range of industries. Explore how to create robotics solutions on a Jetson for embedded applications and deploy high-performance deep learning applications for robotics.
Fundamentals of Accelerated Computing with CUDA Python
Numba, the just-in-time, type-specializing Python function compiler, accelerates Python programs to run on massively parallel NVIDIA GPUs. Following this class, you’ll be able to use Numba to compile and launch CUDA kernels to accelerate your Python applications on NVIDIA GPUs.
With over 500 sessions and many other events, this is just a snapshot of all the amazing activities going on at GTC San Jose. Academic institutions already receive a 50% discount when they register with a university email address, and you can get an additional 25% off using the code NVCHMARTIN before February 18th.