
Enhancing Data Science Outcomes With Efficient Workflow

Price: $500 USD
Course Code: NV-EDSOEW
Duration: 1 day
Available Formats: Classroom

Organizations analyze large amounts of tabular data to uncover insights, improve products and services, and achieve efficiency. For those enterprises that want to thrive in a rapidly changing environment, the ability to process big data quickly can often create the competitive edge needed to succeed. Because speed is of such critical importance, accelerating the data processing pipeline—and doing it in a way that maximizes hardware utility—can profoundly impact the productivity and outcomes of data science efforts.

In this Deep Learning Institute (DLI) workshop, you’ll learn how to create an end-to-end, hardware-accelerated machine learning pipeline for large datasets, using NVIDIA RAPIDS™ and Dask to scale your data science workloads; the same process can be applied to many other machine learning use cases. You’ll learn how to speed up data engineering by avoiding hidden slowdowns, and how to reduce model development time by maximizing hardware utility. Throughout the development process, you’ll use diagnostic tools to identify delays and learn to mitigate common pitfalls.
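
As a taste of the scaling approach, the sketch below shows the common RAPIDS pattern of pairing a Dask-CUDA cluster with a dask-cudf DataFrame. It is a minimal illustration rather than workshop code; the file path and column names ("transactions.parquet", "customer_id", "amount") are placeholders.

```python
# Minimal sketch, assuming dask-cuda and dask-cudf are installed and at least
# one NVIDIA GPU is available. Paths and column names are hypothetical.
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
import dask_cudf

if __name__ == "__main__":
    cluster = LocalCUDACluster()   # one Dask worker per visible GPU
    client = Client(cluster)

    # Read a dataset larger than a single GPU's memory as a partitioned GPU DataFrame.
    ddf = dask_cudf.read_parquet("transactions.parquet")

    # Lazy, GPU-accelerated aggregation across all partitions.
    totals = ddf.groupby("customer_id")["amount"].sum().compute()
    print(totals.head())

    client.close()
    cluster.close()
```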

Skills Gained

  • Develop and deploy an accelerated end-to-end data processing pipeline for large datasets
  • Scale data science workflows using distributed computing
  • Perform DataFrame transformations that take advantage of hardware acceleration and avoid hidden slowdowns (illustrated in the sketch after this list)
  • Enhance machine learning solutions through feature engineering and rapid experimentation
  • Improve data processing pipeline performance by optimizing memory management and hardware utilization
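
The “hidden slowdowns” point is easiest to see in code. The hedged sketch below contrasts a transformation that silently falls back to the CPU with its vectorized GPU equivalent in cuDF; the column names are hypothetical.

```python
# Sketch only: the same transformation two ways. Assumes cudf is installed.
import cudf

df = cudf.DataFrame({"price": [10.0, 20.0, 30.0], "qty": [1, 2, 3]})

# Hidden slowdown: converting to pandas moves the data to host memory and
# runs the transformation row by row on the CPU.
host = df.to_pandas()
host["total"] = host.apply(lambda row: row["price"] * row["qty"], axis=1)

# Accelerated path: a vectorized column expression stays on the GPU and
# executes as a single kernel.
df["total"] = df["price"] * df["qty"]
```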

Prerequisites

  • Basic knowledge of a standard data science workflow on tabular data.
  • Knowledge of distributed computing using Dask. To gain an adequate understanding, we recommend the “Get Started” guide from Dask.
  • Completion of the DLI’s "Fundamentals of Accelerated Data Science" course, or the ability to manipulate data using cuDF and some experience building machine learning models using cuML (roughly the level of the sketch after this list).
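
For a sense of the assumed baseline, the sketch below is roughly the level of cuDF/cuML fluency expected; the file, columns, and model choice are illustrative, not part of the course.

```python
# Hypothetical example: basic cuDF manipulation plus a simple cuML model.
import cudf
from cuml.cluster import KMeans

df = cudf.read_csv("data.csv")          # GPU DataFrame I/O
df["value_z"] = (df["value"] - df["value"].mean()) / df["value"].std()

km = KMeans(n_clusters=4)               # GPU-accelerated clustering
km.fit(df[["value_z", "other_feature"]])
print(km.cluster_centers_)
```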

Course Details

Course Outline

Introduction

Advanced Extract, Transform, and Load (ETL)

  • Learn to process large volumes of data efficiently for downstream analysis.
  • Discuss current challenges of growing data sizes.
  • Perform ETL efficiently on large datasets.
  • Discuss hidden slowdowns and perform DataFrame transformations properly.
  • Discuss diagnostic tools to monitor and optimize hardware utilization.
  • Persist data in a way that’s conducive to downstream analytics (see the sketch after this list).
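
As a hedged illustration of this module’s pattern (not the workshop’s actual notebook), the sketch below filters and transforms a partitioned GPU DataFrame, persists the cleaned result in GPU memory, and writes it back in an analytics-friendly layout; paths and column names are placeholders. During such a run, the Dask dashboard is the usual diagnostic tool for watching worker and memory utilization.

```python
# Sketch assuming a Dask-CUDA cluster is already running (see the earlier
# LocalCUDACluster example). File paths and column names are hypothetical.
import dask_cudf

ddf = dask_cudf.read_parquet("raw_events/*.parquet")

# Vectorized, partition-wise transformations on the GPUs.
ddf = ddf[ddf["amount"] > 0]
ddf["day"] = ddf["timestamp"].dt.day

# Persist pins the cleaned result in (distributed) GPU memory so downstream
# steps reuse it instead of recomputing the whole pipeline.
ddf = ddf.persist()

# Write back partitioned Parquet, a layout conducive to downstream analytics.
ddf.to_parquet("clean_events/")
```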

Training on Multiple GPUs With PyTorch Distributed Data Parallel (DDP)

  • Learn how to improve data analysis on large datasets.
  • Build and compare classification models.
  • Perform feature selection based on predictive power of new and existing features.
  • Perform hyperparameter tuning.
  • Create embeddings using deep learning and perform clustering on the embeddings (a minimal multi-GPU training sketch follows this list).
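
Since the module title names PyTorch DDP, here is a minimal single-node sketch of training a small embedding model across multiple GPUs. The model, data, and sizes are placeholders rather than the workshop’s actual code.

```python
# Minimal DDP sketch. Launch with: torchrun --nproc_per_node=<num_gpus> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")          # torchrun supplies rank/world size
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)
    device = f"cuda:{rank}"

    # Hypothetical embedding model: maps categorical IDs to dense vectors.
    model = torch.nn.Sequential(
        torch.nn.Embedding(10_000, 64),
        torch.nn.Flatten(),
        torch.nn.Linear(64, 2),
    ).to(device)
    model = DDP(model, device_ids=[rank])    # gradients averaged across GPUs

    opt = torch.optim.Adam(model.parameters())
    loss_fn = torch.nn.CrossEntropyLoss()

    # Each rank would normally get its own data shard; random stand-in here.
    ids = torch.randint(0, 10_000, (256, 1), device=device)
    labels = torch.randint(0, 2, (256,), device=device)
    for _ in range(10):
        opt.zero_grad()
        loss = loss_fn(model(ids), labels)
        loss.backward()
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The trained embedding weights can then be pulled back into a GPU DataFrame for the clustering step described above.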

Deployment

  • Learn how to deploy and measure the performance of an accelerated data processing pipeline.
  • Deploy a data processing pipeline with Triton Inference Server (see the client sketch after this list).
  • Discuss various tuning parameters to optimize performance.
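
To make the deployment step concrete, the hedged sketch below queries a pipeline already served by Triton over HTTP; the model name and tensor names ("pipeline", "INPUT0", "OUTPUT0") depend entirely on the model’s configuration and are placeholders here.

```python
# Sketch assuming tritonclient[http] is installed and Triton is serving a
# model named "pipeline" on localhost:8000. Names below are hypothetical.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(32, 16).astype(np.float32)   # placeholder input batch
inp = httpclient.InferInput("INPUT0", list(batch.shape), "FP32")
inp.set_data_from_numpy(batch)

result = client.infer(model_name="pipeline", inputs=[inp])
print(result.as_numpy("OUTPUT0"))                   # model output as numpy
```

Throughput and latency under different tuning parameters (batch size, model instance count) are typically measured with Triton’s perf_analyzer tool.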

Assessment and Q&A