Local GenAI LLMs with Ollama and Docker (Stream 262)

Channel: Bret Fisher Cloud Native DevOps
6,032 views · 170 likes

👉 Edited version of this stream:    • Local GenAI LLMs with Ollama and Dock...  
Learn how to run your own local ChatGPT clone and GitHub Copilot clone by setting up Ollama and Docker's "GenAI Stack" to build apps on top of open-source LLMs and closed-source SaaS models (GPT-4, etc.). Matt Williams is our guest to walk us through all the parts of this solution and show how Ollama makes it easier to set up custom LLM stacks on Mac, Windows, and Linux.
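
For a taste of what the stream covers, here's a minimal sketch (not from the stream itself) of querying a locally running Ollama server from Python. It assumes Ollama is installed and listening on its default port 11434, and that a model has already been pulled, e.g. with `ollama pull llama2`:

    # Minimal sketch: call a local Ollama server's generate endpoint.
    # Assumes Ollama is running on its default port 11434 and the
    # "llama2" model (an example choice) has already been pulled.
    import json
    import urllib.request

    payload = {
        "model": "llama2",   # any locally pulled model tag works here
        "prompt": "Explain what the Docker GenAI Stack is in one sentence.",
        "stream": False,     # return one JSON object instead of a token stream
    }

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))

    print(body["response"])  # the model's completion text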

🗞️ Sign up for my weekly newsletter for the latest on upcoming guests and what I'm releasing: https://www.bretfisher.com/newsletter/

Matt Williams
============
  / technovangelist  
  / technovangelist  

Nirmal Mehta
============
  / nirmalkmehta  
  / normalfaults  
https://hachyderm.io/@nirmal

Bret Fisher
===========
  / bretefisher  
  / bretfisher  
https://www.bretfisher.com

Join my Community 🤜🤛
================
💌 Weekly newsletter on upcoming guests and stuff I'm working on: https://www.bretfisher.com/newsletter/
💬 Join the discussion on our Discord chat server   / discord  
👨‍🏫 Coupons for my Docker and Kubernetes courses https://www.bretfisher.com/courses/
🎙️ Podcast of this show https://www.bretfisher.com/podcast

Show Music 🎵
==========
waiting music: Jakarta - Bonsaye https://www.epidemicsound.com/track/Y...
intro music: I Need A Remedy (Instrumental Version) - Of Men And Wolves https://www.epidemicsound.com/track/z...
outro music: Electric Ballroom - Quesa https://www.epidemicsound.com/track/K...
