
Intermediate Generative AI Engineering for Data Scientists and ML Engineers

Course Code WA3516
Duration 2 days
Available Formats Classroom

This Intermediate Generative AI (GenAI) course is for Machine Learning and Data Science professionals who want to learn advanced GenAI and LLM techniques like fine-tuning, domain adaptation, and evaluation. Participants learn how to leverage popular tools and frameworks, including Python, Hugging Face Transformers, and open-source LLMs.

Skills Gained

  • Develop and optimize prompts for improved LLM performance and output quality
  • Implement advanced techniques such as Retrieval Augmented Generation (RAG) and vector embeddings
  • Evaluate and compare LLM performance using appropriate metrics and benchmarks

Prerequisites

  • At least 6 months of practical Python experience (functions, loops, control flow)
  • Data science basics: NumPy, pandas, and scikit-learn
  • Solid understanding of machine learning concepts and algorithms: regression, classification, unsupervised learning (clustering), and neural networks
  • Strong foundations in probability, statistics, and linear algebra
  • Practical experience with at least one deep learning framework (e.g., TensorFlow or PyTorch) recommended
  • Familiarity with natural language processing (NLP) concepts and techniques, such as text preprocessing, word embeddings, and language models

Course Details

Outline

Introduction

Advanced Fine-Tuning and RAG Techniques

  • Advanced fine-tuning techniques for LLMs
  • Implementing Retrieval Augmented Generation (RAG)
  • Improving LLM output quality and relevance
  • Building a RAG-powered LLM application for a specific use case
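
The RAG workflow covered in this module can be sketched end to end in standard-library Python. The corpus, `embed`, `retrieve`, and `build_prompt` names below are illustrative stand-ins, not any framework's API; in the course, the embedding and generation steps would use real models (e.g., via Hugging Face Transformers).

```python
# Minimal RAG sketch: retrieve the most relevant document, then prepend it
# to the prompt before calling an LLM. The bag-of-words "embeddings" below
# are a stand-in for a real embedding model.
import math
import re
from collections import Counter

CORPUS = [
    "The return policy allows refunds within 30 days of purchase.",
    "Shipping is free on orders over 50 dollars.",
    "Support is available by email around the clock.",
]

def embed(text):
    # Toy embedding: a word-count vector (a real system would call a model).
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(CORPUS, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Ground the LLM's answer in the retrieved context.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("What is the return policy?")
print(prompt)
```

The key design point is the separation of retrieval from generation: the retriever narrows the corpus to relevant context, so the LLM answers from supplied facts rather than parametric memory alone.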

Vector Embeddings and Semantic Search

  • Introduction to vector embeddings and their applications in NLP
  • Using vector embeddings for semantic search and recommendation systems
  • Generating vector embeddings from text data
  • Implementing a similarity search using libraries like Faiss or Annoy
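
The similarity search that libraries like Faiss and Annoy accelerate is, at its core, a nearest-neighbour lookup over vectors. A brute-force NumPy sketch of that underlying computation (random vectors stand in for model-generated embeddings):

```python
# Brute-force cosine-similarity search: the computation an index such as
# Faiss's IndexFlatIP performs (inner product over L2-normalized vectors).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings: 5 "documents" and one query in a 4-dimensional space.
docs = rng.normal(size=(5, 4)).astype("float32")
query = docs[2] + 0.05 * rng.normal(size=4).astype("float32")  # near doc 2

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Normalizing first turns the inner product into cosine similarity.
scores = normalize(docs) @ normalize(query)
top_k = np.argsort(-scores)[:2]  # indices of the two nearest documents
print(top_k)
```

An exhaustive scan like this is fine for small corpora; approximate indexes (Faiss's quantized indexes, Annoy's random-projection trees) trade a little recall for sub-linear search time at scale.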

LLM Optimization and Efficiency

  • Techniques for optimizing LLM performance
  • Quantization and pruning
  • Applying optimization techniques to reduce LLM model size and inference time
  • Strategies for efficient deployment and serving of LLMs in production
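
Quantization, named above, can be illustrated with a symmetric int8 scheme over a dummy weight tensor. The scale factor and error bound shown are the general mechanism, not any particular library's implementation:

```python
# Symmetric post-training quantization sketch: map float32 weights to int8
# with one scale factor, cutting storage 4x at the cost of rounding error.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(scale=0.02, size=1000).astype(np.float32)  # dummy layer

# Choose the scale so the largest-magnitude weight maps to 127.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)

# Dequantize to measure the error the rounding introduced.
deq = q.astype(np.float32) * scale
max_err = np.abs(weights - deq).max()

print(weights.nbytes, "->", q.nbytes)  # 4000 -> 1000 bytes
print(max_err < scale)                 # error below one quantization step
```

Production schemes refine this basic idea with per-channel scales, zero points for asymmetric ranges, and sub-byte formats such as 4-bit, but the size/accuracy trade-off is the same.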

Ethical Considerations and Best Practices

  • Addressing biases and fairness issues in LLMs
  • Ensuring transparency and accountability in LLM-powered applications
  • Best practices for responsible AI development and deployment
  • Navigating privacy and security concerns when working with LLMs and sensitive data

Conclusion