Enroll now: https://bit.ly/4fe8azw
Addressing security and privacy in applications is vital. Applications built on LLMs pose special challenges, especially regarding private data.
Introducing Federated Learning, a two-part course series built in collaboration with Flower Labs. You'll learn how to use Flower, a popular open-source framework, to build a federated learning system and to implement federated fine-tuning of LLMs with private data.
In the first course, called Intro to Federated Learning, you'll learn about the federated training process, how to tune and customize it, how to increase data privacy, and how to manage bandwidth usage in federated learning.
In the second course, Federated Fine-tuning of LLMs with Private Data, you’ll learn to apply federated learning to LLMs. You’ll explore challenges like data memorization and the computational resources required by LLMs, and explore techniques for efficiency and privacy enhancement, such as Parameter-Efficient Fine-Tuning (PEFT) and Differential Privacy (DP).
Each part of this two-part course series is self-contained. If you already know what federated learning is, you can start directly with part two.
In detail, here’s what you’ll do in part one:
Learn how federated learning is used to train a variety of models, from speech and vision models to large language models, across distributed data while offering key data-privacy options to users and organizations.
Learn how to train AI on distributed data by building, customizing, and tuning a federated learning project using Flower and PyTorch.
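As a framework-free illustration of what such a project does at its core, here is the server-side aggregation step, federated averaging (FedAvg), sketched in plain NumPy. The course implements this with Flower and PyTorch; the function name and setup below are illustrative, not the course's code.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model weights (FedAvg).

    client_weights: list of per-client parameter vectors
    client_sizes:   number of training examples each client holds
    """
    total = sum(client_sizes)
    coeffs = np.array(client_sizes, dtype=float) / total
    stacked = np.stack(client_weights)
    # Each client's contribution is proportional to its local dataset size.
    return (coeffs[:, None] * stacked).sum(axis=0)

# Two simulated clients: one holding 100 examples, one holding 300.
w1 = np.array([1.0, 0.0])
w2 = np.array([0.0, 1.0])
global_w = fedavg([w1, w2], [100, 300])
print(global_w)  # [0.25 0.75]
```

In a real Flower project, clients train locally on their private data and only these weight updates, never the raw data, travel to the server.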
Gain intuition on how to think about Privacy-Enhancing Technologies (PETs) in the context of federated learning, and work through an example using Differential Privacy, which protects individual data points from being traced back to their source.
Learn about two types of differential privacy, central and local, along with the dual approach of clipping and noising used to protect private data.
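The clip-and-noise recipe can be sketched in a few lines. This is a minimal illustration of central differential privacy (the server clips each client update, averages, then adds Gaussian noise); the parameter values are illustrative, not a calibrated privacy mechanism.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Bound a client update's L2 norm so no single client dominates."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm) if norm > 0 else update

def noisy_average(updates, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Central DP: clip each update, average them, then add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in updates]
    avg = np.mean(clipped, axis=0)
    # Noise scale is tied to the clipping bound, so clipping and noising
    # work as a pair: the clip bounds any one client's influence, the
    # noise masks what remains.
    sigma = noise_multiplier * clip_norm / len(updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

updates = [np.array([3.0, 4.0]), np.array([0.3, -0.4])]
print(noisy_average(updates))
```

In local DP, each client would clip and noise its own update before sending it; in central DP, as above, the server applies the mechanism after collection.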
Explore the bandwidth requirements of federated learning and how you can reduce them by shrinking the update size and lowering the communication frequency.
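One common way to shrink update size, sketched below, is top-k sparsification: a client sends only its k largest-magnitude update entries as index/value pairs instead of the full vector. This is a generic illustration of the idea, not necessarily the specific technique the course uses.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries; return (indices, values)."""
    idx = np.argsort(np.abs(update))[-k:]
    return idx, update[idx]

update = np.array([0.01, -0.9, 0.05, 0.7, -0.02])
idx, vals = top_k_sparsify(update, 2)
# Sending 2 index/value pairs instead of 5 floats cuts the payload.
print(sorted(idx.tolist()))  # [1, 3]
```

Lowering communication frequency is complementary: clients run more local training epochs per round, so fewer rounds (and fewer uploads) are needed overall.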
In the second part, you’ll learn how to fine-tune LLMs on your own data in a federated way, a process called federated LLM fine-tuning:
Understand the importance of safely training LLMs on private data.
Learn about the limitations of current training data and how Federated LLM Fine-tuning can help overcome these challenges.
Build an LLM that is fine-tuned with private medical data to answer complex questions, where you’ll see the benefits of federated methods when using private data.
Learn how federated LLM fine-tuning works and how it simplifies access to private data, reduces bandwidth with Parameter-Efficient Fine-Tuning (PEFT), and increases the privacy of training data with Differential Privacy.
Understand how LLMs can leak training data and how federated LLMs can lower this risk.
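A back-of-the-envelope sketch of why PEFT matters for federated fine-tuning: with a low-rank adapter method such as LoRA, clients exchange only small adapter matrices instead of full weight matrices. The layer size and rank below are illustrative, not taken from the course.

```python
def lora_param_counts(d_in, d_out, rank):
    """Compare full fine-tuning vs. LoRA adapters for one weight matrix."""
    full = d_in * d_out            # every weight trained and transmitted
    lora = rank * (d_in + d_out)   # only low-rank factors A (d_in x r), B (r x d_out)
    return full, lora

# Illustrative transformer projection layer: 4096 x 4096 weights, LoRA rank 8.
full, lora = lora_param_counts(4096, 4096, 8)
print(f"full: {full:,} params, LoRA: {lora:,} params, "
      f"reduction: {full / lora:.0f}x")  # reduction: 256x
```

Since each federated round ships the trainable parameters over the network, a 256x smaller payload per layer directly translates into the bandwidth savings mentioned above.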
Learn more: https://bit.ly/4fe8azw
Published by DeepLearningAI, 24 July 2024.