Having just about given up on finding a book on artificial intelligence (AI) and its various iterations that I could understand, I was pleasantly surprised to come across this little gem.
Well written and full of real-life anecdotes, it sets out to explain what algorithms can and cannot achieve at this stage.
Perhaps the most interesting chapter concerns what the author calls the predictability-resilience paradox. As one might surmise, a predictable algorithm is one that can handle clearly defined problems, and not much more; a resilient algorithm, by contrast, is one that is, well, unpredictable, as it takes on less clearly defined issues.
A useful analogy is that of children to whom, at a very early age, we give a set of rules, only to see those rules "broken" as they choose to explore life off the beaten track, based on their observations of what older people do. This presents challenges to parents, and so it is with unruly algorithms.
At the end of the day, to quote the author: “Technology is most useful when it helps us solve the most creative problems we face as human beings. And using technology to solve these problems effectively will require us to move away from predictable systems”.
And yet, as several airplane accidents in recent years have shown, the willingness of pilots to put unreasonable trust in automated systems occasionally results in tragedies that might have been averted had the pilot been able to override the system, or known what to do if the autopilot were disabled.
Of course, the greatest worry confronting society is the potential these various technologies have for literally taking over the world. Hence the emergence of movements, often backed by experts themselves, that would restrict the reach of AI beyond certain limits. Indeed, I have a nephew who is a signatory to the Montreal Declaration for the Responsible Development of Artificial Intelligence.
Kartik Hosanagar, A Human's Guide to Machine Intelligence, Viking.