Book name - Human Compatible
Author - Stuart Russell
Host - Neha Prakash
Review composed by - Sachin Gaur
Narrated by - Clarion Kodamanchili
Editor - Ishika Taneja
Length - 2 minutes
For more book reviews, visit innohealthmagazine.com
Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Autonomous Weapons
Future of Life Institute Podcast
Stuart Russell, Professor of Computer Science at UC Berkeley, and Zachary Kallenborn, WMD and drone swarms expert, join us to discuss the highest-risk and most destabilizing aspects of lethal autonomous weapons. Topics discussed in this episode include:
- The current state of the deployment and development of lethal autonomous weapons and swarm technologies
- Drone swarms as a potential weapon of mass destruction
- The risks of escalation, unpredictability, and proliferation with regard to autonomous weapons
- The difficulty of attribution, verification, and accountability with autonomous weapons
- Autonomous weapons governance as norm setting for global AI issues

You can find the page for this podcast here: https://futureoflife.org/2021/02/25/stuart-russell-and-zachary-kallenborn-on-drone-swarms-and-the-riskiest-aspects-of-lethal-autonomous-weapons/
You can check out the new lethal autonomous weapons website here: https://autonomousweapons.org/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps:
0:00 Intro
2:23 Emilia Javorsky on lethal autonomous weapons
7:27 What is a lethal autonomous weapon?
11:33 Autonomous weapons that exist today
16:57 The concerns of collateral damage, accidental escalation, scalability, control, and error risk
26:57 The proliferation risk of autonomous weapons
32:30 To what extent are global superpowers pursuing these weapons? What is the state of industry's pursuit of the research and manufacturing of this technology?
42:13 A possible proposal for a selective ban on small anti-personnel autonomous weapons
47:20 Lethal autonomous weapons as a potential weapon of mass destruction
53:49 The unpredictability of autonomous weapons, especially when swarms are interacting with other swarms
58:09 The risk of autonomous weapons escalating conflicts
01:10:50 The risk of drone swarms proliferating
01:20:16 The risk of assassination
01:23:25 The difficulty of attribution and accountability
01:26:05 The governance of autonomous weapons being relevant to the global governance of AI
01:30:11 The importance of verification for responsibility, accountability, and regulation
01:35:50 Concerns about the beginning of an arms race and the need for regulation
01:38:46 Wrapping up
01:39:23 Outro

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
SBM 034 | Human Compatible - Stuart Russell | Lesly Zerna
Science Book Movement - Notion360. Online review of the book Human Compatible by Stuart Russell. Guest: Lesly Zerna. Join our community on Discord via the following link: https://bookmovement.co/discord See acast.com/privacy for privacy and opt-out information.
Stuart Russell: Artificial Intelligence: A Modern Approach - Thoughts and Points
Attila on the World
In this video I will talk about the book Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig. This book was my introduction to artificial intelligence. AI is already all around us and will only become more important in the future, so it's worth understanding. My playlist about AI: https://www.youtube.com/playlist?list=PL8k7NlvXa9ZmDp_a4XAJVG1jspQkIesgZ Twitter: https://twitter.com/AttilaonthWorld YouTube channel: https://www.youtube.com/channel/UCADpTO2CJBS7HNudJu9-nvg
Bernard Marr's Future of Business & Technology Podcast
In this podcast, I will be joined by UC Berkeley professor Stuart Russell, one of the world's leading thought leaders on artificial intelligence, to explore AI's role in our world. We will talk about the latest AI innovations, the dangers that come with AI, and what this all means for us humans.
Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI
Future of Life Institute Podcast
Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, best-selling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more. Topics discussed in this episode include:
- The historical and intellectual foundations of AI
- How AI systems achieve or do not achieve intelligence in the same way as the human mind
- The rise of AI and what it signifies
- The benefits and risks of AI in both the short and long term
- Whether superintelligent AI will pose an existential risk to humanity

You can find the page for this podcast here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/
You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3
You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

Timestamps:
0:00 Intro
4:30 The historical and intellectual foundations of AI
11:11 Moving beyond dualism
13:16 Regarding the objectives of an agent as fixed
17:20 The distinction between artificial intelligence and deep learning
22:00 How AI systems achieve or do not achieve intelligence in the same way as the human mind
49:46 What changes to human society does the rise of AI signal?
54:57 What are the benefits and risks of AI?
01:09:38 Do superintelligent AI systems pose an existential threat to humanity?
01:51:30 Where to find and follow Steve and Stuart

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Stuart Russell - Professor of Computer Science at the University of California, Berkeley - joins Nihal to discuss his latest book ‘Human Compatible: Artificial Intelligence and the Problem of Control.’ They chat about how Hollywood has clouded our perception of AI, why we don’t need robots in human form, and how the best jobs in the future will be ones that involve empathy. #Penguinpodcast ‘Human Compatible’ is available to buy as an audiobook now - https://apple.co/2UpZPkK / https://adbl.co/2W2muEY See acast.com/privacy for privacy and opt-out information.
118. Stuart Russell — Human Compatible: Artificial Intelligence and the Problem of Control
The Michael Shermer Show
In the popular imagination, superhuman artificial intelligence is an approaching tidal wave that threatens not just jobs and human relationships, but civilization itself. Conflict between humans and machines is seen as inevitable and its outcome all too predictable. In this groundbreaking book, distinguished AI researcher Stuart Russell argues that this scenario can be avoided, but only if we rethink AI from the ground up. Russell begins by exploring the idea of intelligence in humans and in machines. He describes the near-term benefits we can expect, from intelligent personal assistants to vastly accelerated scientific research, and outlines the AI breakthroughs that still have to happen before we reach superhuman AI. He also spells out the ways humans are already finding to misuse AI, from lethal autonomous weapons to viral sabotage. If the predicted breakthroughs occur and superhuman AI emerges, we will have created entities far more powerful than ourselves. How can we ensure they never, ever, have power over us? Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy. Such machines would be humble, altruistic, and committed to pursue our objectives, not theirs. This new foundation would allow us to create machines that are provably deferential and provably beneficial.

Shermer and Russell also discuss:
- natural intelligence vs. artificial intelligence
- “g” in human intelligence vs. G in AGI (Artificial General Intelligence)
- the values alignment problem
- Hume’s “Is-Ought” naturalistic fallacy as it applies to AI values vs. human values
- regulating AI
- Russell’s response to the arguments of AI apocalypse skeptics Kevin Kelly and Steven Pinker
- the Chinese social control AI system and what it could lead to
- autonomous vehicles, weapons, and other systems and how they can be hacked
- AI and the hacking of elections
- what keeps Stuart up at night.

Stuart Russell is a professor of Computer Science and holder of the Smith-Zadeh Chair in Engineering at the University of California, Berkeley. He has served as the Vice-Chair of the World Economic Forum’s Council on AI and Robotics and as an advisor to the United Nations on arms control. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. He is the author (with Peter Norvig) of the definitive and universally acclaimed textbook on AI, Artificial Intelligence: A Modern Approach. Listen to Science Salon via Apple Podcasts, Spotify, Google Play Music, Stitcher, iHeartRadio, and TuneIn.
Why we need to rethink the purpose of AI – A conversation with Stuart Russell
McKinsey on AI
Stuart Russell, one of the world’s foremost thought leaders on artificial intelligence, explains how we can ensure AI truly benefits humanity rather than causing us harm. According to Russell, doing so begins with abandoning the idea of creating “intelligent” machines altogether. Listen to the podcast (duration: 20:37).
94 | Stuart Russell on Making Artificial Intelligence Compatible with Humans
Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
Artificial intelligence has made great strides of late, in areas as diverse as playing Go and recognizing pictures of dogs. We still seem to be a ways away from AI that is “intelligent” in the human sense, but it might not be too long before we have to start thinking seriously about the “motivations” and “purposes” of artificial agents. Stuart Russell is a longtime expert in AI, and he takes extremely seriously the worry that these motivations and purposes may be dramatically at odds with our own. In his book Human Compatible, Russell suggests that the secret is to give up on building our own goals into computers, and instead program them to figure out our goals by actually observing how humans behave.

Support Mindscape on Patreon.

Stuart Russell received his Ph.D. in computer science from Stanford University. He is currently a Professor of Computer Science and the Smith-Zadeh Professor in Engineering at the University of California, Berkeley, as well as an Honorary Fellow of Wadham College, Oxford. He is a co-founder of the Center for Human-Compatible Artificial Intelligence at UC Berkeley. He is the author of several books, including (with Peter Norvig) the classic text Artificial Intelligence: A Modern Approach. Among his numerous awards are the IJCAI Computers and Thought Award, the Blaise Pascal Chair in Paris, and the World Technology Award. His new book is Human Compatible: Artificial Intelligence and the Problem of Control.

Links:
Web page
Google Scholar publications
Wikipedia
Talk on Provably Beneficial Artificial Intelligence
Amazon author page