After we explored attacking LLMs, in this video we finally talk about defending against prompt injections. Is it even possible?
Buy my shitty font (advertisement): shop.liveoverflow.com
Watch the complete AI series:
• Hacking Artificial Intelligence
Language Models are Few-Shot Learners: https://arxiv.org/pdf/2005.14165.pdf
A Holistic Approach to Undesired Content Detection in the Real World: https://arxiv.org/pdf/2208.03274.pdf
Chapters:
00:00 - Intro
00:43 - AI Threat Model?
01:51 - Inherently Vulnerable to Prompt Injections
03:00 - It's not a Bug, it's a Feature!
04:49 - Don't Trust User Input
06:29 - Change the Prompt Design
08:07 - User Isolation
09:45 - Focus LLM on a Task
10:42 - Few-Shot Prompt
11:45 - Fine-Tuning Model
13:07 - Restrict Input Length
13:31 - Temperature 0
14:35 - Redundancy in Critical Systems
15:29 - Conclusion
16:21 - Checkout LiveOverfont
Hip Hop Rap Instrumental (Crying Over You) by christophermorrow
/ chris-morrow-3 CC BY 3.0
Free Download / Stream: http://bit.ly/2AHA5G9
Music promoted by Audio Library • Hip Hop Rap Instrumental (Crying Over...
=[ ❤️ Support ]=
→ per Video: / liveoverflow
→ per Month: / @liveoverflow
2nd Channel: / liveunderflow
=[ 🐕 Social ]=
→ Twitter: / liveoverflow
→ Streaming: https://twitch.tvLiveOverflow/
→ TikTok: / liveoverflow_
→ Instagram: / liveoverflow
→ Blog: https://liveoverflow.com/
→ Subreddit: / liveoverflow
→ Facebook: / liveoverflow
Watch video Defending LLM - Prompt Injection online without registration, duration hours minute second in high quality. This video was added by user LiveOverflow 11 May 2023, don't forget to share it with your friends and acquaintances, it has been viewed on our site 50,218 once and liked it 2.6 thousand people.