Arvind Narayanan is a professor of Computer Science at Princeton and the director of the Center for Information Technology Policy. He is a co-author of the book AI Snake Oil and a prominent critic of the AI scaling myths around the importance of just adding more compute. He is also the lead author of a textbook on the computer science of cryptocurrencies which has been used in over 150 courses around the world, and an accompanying Coursera course that has had over 700,000 learners.
-----------------------------------------------
Timestamps:
(00:00) Intro
(01:18) AI Hype vs. Bitcoin Hype: Similarities & Differences
(03:49) The Misalignment Between Compute & Performance
(08:10) Synthetic Data
(09:30) Creating Effective Agents Despite Incomplete Data
(12:00) Why Is the AI Industry Shifting Toward Smaller Models?
(16:31) The Growing Gap Between AI Models & Compute Capabilities
(19:44) Predictions on the Timeline for AGI
(27:00) Policy Proposals for U.S. and European AI Regulation
(29:29) AI & Deepfakes: The Risk of Discrediting Real News
(35:59) Revolutionising Healthcare with AI in Your Pocket
(40:29) Is AI Job Replacement Fear Overhyped or Real?
(41:46) AI's Potential as a Weapon
(46:19) Quick-Fire Round
-----------------------------------------------
In Today’s Episode with Arvind Narayanan We Discuss:
1. Compute, Data, Algorithms: What is the Bottleneck:
Why does Arvind disagree with the commonly held notion that more compute will result in a commensurate and continuous improvement in model performance?
Will we continue to see players move into the compute layer in a bid to internalise the margin? What does that mean for Nvidia?
Why does Arvind not believe that data is the bottleneck? How does Arvind analyse the future of synthetic data? Where is it useful? Where is it not?
2. The Future of Models:
Does Arvind agree that this is the fastest commoditization of a technology he has seen?
How does Arvind analyse the future of the model landscape? Will we see a world of few very large models or a world of many unbundled and verticalised models?
Where does Arvind believe the most value will accrue in the model layer?
Is it possible for smaller companies or university research institutions to even play in the model space given the immense capital needed to fund model development?
3. Education, Healthcare and Misinformation: When AI Goes Wrong:
What are the biggest dangers that AI poses to society today?
To what extent does Arvind believe misinformation through generative AI is going to be a massive problem for democracies?
How does Arvind analyse AI impacting the future of education? What does he believe everyone gets wrong about AI and education?
Does Arvind agree that AI will be able to put a doctor in everyone’s pocket? Where does he believe this theory falls down?
-----------------------------------------------
Subscribe on Spotify:
https://open.spotify.com/show/3j2KMcZ...
Subscribe on Apple Podcasts:
https://podcasts.apple.com/us/podcast...
Follow Harry Stebbings on Twitter:
@harrystebbings
Follow Arvind Narayanan on Twitter:
@random_walker
Follow 20VC on Instagram:
@20vchq
Follow 20VC on TikTok:
@20vc_tok
Visit our Website:
https://www.20vc.com
Subscribe to our Newsletter:
https://www.thetwentyminutevc.com/con...
-----------------------------------------------
#20vc #harrystebbings #arvindnarayanan #princetonuniversity #ai #venturecapital #samaltman #alexwang #openai #computerscience #technology