AI Explained: The BonsXAI Blog

Debugging the Apocalypse: Can We Control the AI We Can't Understand?

 

Author: Peter Ball

 

I learned to program computers on a Digital Equipment Corporation PDP-11 with a copy of The C Programming Language by Brian Kernighan and Dennis Ritchie at my side. My primary strategy for solving programming problems was the maxim that "all problems in computer science can be solved by another level of indirection." Hence, my C code involved a lot of pointers to pointers. Like most novice programmers, I ran into Kernighan's Law before I ever heard it stated: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." You eventually learn to accommodate this conundrum or find something else to do.

 

Earlier this year, Yuval Harari published a short piece in The Economist arguing that "AI has hacked the operating system of human civilization." He warns that AI threatens to destroy our civilization and calls for an FDA-like agency to "regulate AI before it regulates us." Many AI regulatory and governance initiatives are indeed underway; for an overview, see Katharina Koerner's blog post here. But are these initiatives adequate? Consider the complexity of evolving ecosystems of multi-agent AI systems: what does Kernighan's Law say about the prospects for regulating that? When AI systems become smarter than humans at creating such systems (perhaps they already have), who is going to debug them? Per Kernighan's Law, nobody can. That's a problem.

 

I suggest that regulation of AI systems is necessary, but not sufficient, to prevent the apocalyptic demise of civilization that Harari warns of. If, as he suggests, AI is a new weapon of mass destruction that can annihilate our mental and social world, shouldn't we engineer a culture immune to manipulation by ecosystems of uncontrollable AI? Can we really engineer culture? Sure, we can. One example is the United States of America, an engineered culture designed by the country's founders to provide "liberty and justice for all." Although it has been and remains flawed, it's been a functioning constitutional democratic republic. But only, as Ben Franklin said, "if you can keep it." The "you" in that famous phrase is "we the people." As the song says, "freedom isn't free," and unless the people do all that's necessary to keep it, they'll lose their republic and the freedoms it provides.

 

Similarly, avoiding AI apocalypse won't be free. So, let's start with AI regulation, but let's also design and develop institutions and supporting culture that can protect us from the inevitable onslaught of AI that would manipulate our mental and social worlds in a way that we'll never be clever enough to debug.

 

Join us in our mission to bring the benefits of XAI to everyone.

Get in touch with us today to learn more about how we're making a difference.