Anders Sandberg discusses AI reasoning and the new OpenAI model o1 and its reasoning capabilities.
0:00 Intro
0:08 What is interesting about OpenAI o1?
1:31 o1 scoring high on IQ tests
2:15 The 'g' factor: Human vs AGI?
7:34 Will the current LLMs lead to an AI takeoff?
8:52 AI has struggled with some problems
10:37 AI's methodology for problem solving is getting better
11:37 AI sanity-checking its logic?
12:26 AI self-correcting reasoning and the slope of the intelligence explosion
13:04 AI investment returns may not track AI capability gains
13:56 AI accelerating research & the problem of hallucination
18:23 LLMs, reinforcement learning - hybrid AI
20:09 AI agents and AI safety
21:43 Hidden chain of thought reasoning
23:24 OpenAI System Card and AI Safety (biological & persuasiveness risks)
26:24 Indirect Normativity, metaethics, moral realism & AI safety
33:08 Evaluating AI for safety - translating moral truths
37:11 AI, governance, and coordination problems
43:59 Global coordination & AI
45:44 Book 'Grand Futures' in development
46:53 Book 'Law, AI and Leviathan' coming soon
#Strawberry #OpenAI #AGI
Many thanks for tuning in!
Please support SciFuture by subscribing and sharing!
Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series?
Please fill out this form: https://docs.google.com/forms/d/1mr9P...
Kind regards,
Adam Ford
Science, Technology & the Future - #SciFuture - http://scifuture.org