Jailbreaking LLMs - Prompt Injection and LLM Security

Published: 01 January 1970
on channel: Mozilla Developer
2,915
93

Building applications on top of Large Language Models brings unique security challenges, some of which we still don't have great solutions for. Simon will be diving deep into prompt injection and jailbreaking, how they work, why they're so hard to fix and their implications for the things we are building on top of LLMs.Simon Willison is the creator of Datasette, an open source tool for exploring and publishing data. He currently works full-time building open source tools for data journalism, built around Datasette and SQLite.


Watch video Jailbreaking LLMs - Prompt Injection and LLM Security online without registration, duration hours minute second in high quality. This video was added by user Mozilla Developer 01 January 1970, don't forget to share it with your friends and acquaintances, it has been viewed on our site 2,915 once and liked it 93 people.