Author: Peter Ball
"The Game" is one of my favorite episodes of Star Trek: The Next Generation. Commander Riker returns from vacationing on the pleasure planet Risa with replicated copies of an addictive augmented reality game that controls minds, rendering the player susceptible to the Ktarians' master plan to take control of the starship Enterprise. Happily, the plot is narrowly foiled by the heroic efforts of young cadet Wesley Crusher.
The lesson of "The Game" is that people are woefully susceptible to manipulation. AI is in many ways far more dangerous than the fictional game featured in Star Trek. The Game was a distinct, easily observed device. Before being forcibly exposed to it, Wesley Crusher could readily recognize his colleagues' entrapment and act accordingly, repairing the sabotaged android Data. In the real world, AI is obscure, cloaked in layers of opaque technology. Without sophisticated tools, it's impossible to tell bona fide from bogus. That's why we at bonsXAI have developed a complete suite of accessible tools to unveil and explain AI through simple dashboards and human-centered data storytelling.
In the Star Trek episode, the attack was launched by the nefarious Ktarians. Ever the macho cliché, Commander Riker falls for military spy and seductress Etana Jol, exposing the Enterprise and potentially all of Starfleet to her evil plan. What a chump! In the real world, there are plenty of bad actors, but it helps to know when someone may be trying to manipulate you. Forewarned is forearmed. A more insidious danger lurks in the shadows of our ignorance, where we don't suspect adversarial conduct and where biased context unknowingly warps our frame of reference. We risk blissfully getting sucked into a humanity-scale doom loop. Arguably, it's already underway. Many very smart people warn that AI could propel humanity into a downward spiral to oblivion.
In his book WTF?: What's the Future and Why It's Up to Us, futurist Tim O'Reilly describes "how the market, the cornerstone of capitalism, is on its way to becoming that long-feared rogue AI, enemy to humanity ...". I wouldn't describe 'capitalism run amok' as an AI. To me, it is merely an example of ignorance of complexity combined with our propensity to oversimplify (capitalism vs. communism) and engage in self-righteous judgment (good vs. evil). The result: we cement ourselves to a biased frame of reference. O'Reilly goes on to describe how we arrived at "our Skynet moment", wherein the machine had begun its takeover. By putting our faith in Adam Smith's "invisible hand", Milton Friedman's gospel of shareholder value maximization, and Gordon Gekko's "greed is good", we've adopted a system of thinking and acting that is particularly vulnerable to human tragedy accelerated by AI. Driven to maximize shareholder return, corporations build AI systems designed for corporate profit, regardless of the consequences for humanity. By design, they lead us into temptation and deliver us unto evil.
Yogi Berra famously said, "You can observe a lot by just watching." True enough if you're watching a baseball game. Just by watching, we can observe that success in competitive free-market economies requires adoption of AI technology. But in our Skynet moment, we need special scopes to expose the truths and dangers hidden behind obscure AI systems. Policymakers and governments around the world are hard at work developing regulations to protect citizens from the dangers of AI run amok. If you want to compete, now is a good time to build a solid foundation for a future of quickly evolving AI technology and regulatory compliance requirements.