Artificial Intelligence and its existential risks

2,126,354 views · 28,846 likes · 2 years ago · 03:17:51

About this podcast

Dive deep into the future of Artificial Intelligence with legendary researcher and philosopher Eliezer Yudkowsky on the Lex Fridman Podcast. In this profound conversation, Yudkowsky shares his critical insights on superintelligent AI, artificial general intelligence (AGI) alignment, and the existential risks AGI may pose to humanity. The discussion covers the implications of advanced AI models like GPT-4, the challenges of open-sourcing AGI, and crucial questions about consciousness, evolution, and the timeline of AGI development. This episode is a must-listen for anyone interested in AI safety, the philosophical underpinnings of advanced AI, and the urgent considerations for humanity's future in the age of superintelligence.