Book review: The Alignment Problem: Machine Learning and Human Values, by Brian Christian.

I was initially skeptical of Christian's focus on problems with AI as it exists today. Most writers with this focus miss the scale of catastrophe that could result from AIs that are smart enough to subjugate us.

Christian mostly writes about problems that are visible in existing AIs. Yet he organizes his discussion of near-term risks in ways that don't pander to near-sighted concerns, and which nudge readers in the direction of wondering whether today's mistakes represent the tip of an iceberg.

Most of the book carefully avoids alarmist or emotional tones. It's hard to tell whether he has an opinion on how serious a threat unaligned AI will be – presumably it's serious enough to write a book about? Could the threat be more serious than that implies? Christian notes, without indicating his own opinion, that some people think so:

> A growing chorus within the AI community … believes, if we are not sufficiently careful, that this is literally how the world will end.

The book mostly focuses on why it's hard to align software with human values. Christian portrays this as mostly an old problem, whose importance is increasing. And – for today at least – the humans have lost the game.