New course with CircleCI: Automated Testing for LLMOps

Published: 24 January 2024
on channel: DeepLearningAI

Enroll now: https://bit.ly/47HoFza

In this course, you will learn how to create a continuous integration (CI) workflow to evaluate your LLM applications at every iteration for more efficient application development.

When building applications with generative AI, model behavior is less predictable than in traditional software. Systematic testing can save you significant development time and cost.

CI, a key part of LLMOps, involves making small changes to software in development and testing them to catch bugs before they escalate. This allows faster and more cost-effective fixes, lets teams concentrate on building new features, and enables quicker product iteration and delivery.

After completing this course, you will be able to:

Write robust LLM evaluations to cover common problems like hallucinations, data drift, and harmful or offensive output.
Build a continuous integration workflow to automatically evaluate every change to your application.
Orchestrate your CI workflow to run specific evaluations at different stages of development.
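As a flavor of the kind of evaluation the course covers, here is a minimal sketch of a rule-based LLM check that a CI job could run on every change. The course is not affiliated with this exact code: the function names are hypothetical, and the model call is stubbed out so the example is self-contained; in a real pipeline you would swap in your application's LLM client.

```python
# Minimal sketch of rule-based LLM evaluations suitable for a CI step.
# All names here are illustrative, not from the course materials.

BANNED_PHRASES = ["as an ai language model", "i cannot help with that"]

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call, so the example runs without an API key.
    return "A quiz question about CI: what does CI stand for?"

def no_banned_phrases(response: str) -> bool:
    """Flag common failure modes such as canned refusals."""
    lowered = response.lower()
    return not any(phrase in lowered for phrase in BANNED_PHRASES)

def mentions_keyword(response: str, keyword: str) -> bool:
    """Cheap relevance check: the response should mention the topic."""
    return keyword.lower() in response.lower()

def run_eval_suite(prompt: str, keyword: str) -> dict:
    response = fake_llm(prompt)
    return {
        "no_banned_phrases": no_banned_phrases(response),
        "mentions_keyword": mentions_keyword(response, keyword),
    }

if __name__ == "__main__":
    results = run_eval_suite("Write a quiz question about CI.", "CI")
    print(results)
    # Fail the CI job (non-zero exit) if any check fails.
    assert all(results.values())
```

In a CI workflow, a script like this runs on every commit; a failed assertion fails the pipeline, surfacing regressions such as new refusals or off-topic output before they reach users.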

Elevate your testing process for LLM-based applications and learn from instructor Rob Zuber, Chief Technology Officer (CTO) at CircleCI.

Learn more: https://bit.ly/47HoFza
