AI Explained: The BonsXAI Blog

Navigating the AI Tidal Wave

Author: Bedie Moran

Photo by DeepMind on Unsplash

Since OpenAI launched its user-friendly ChatGPT consumer application just a few short months ago, discussions of the risks of AI have centered mostly on the authenticity of authorship, threats to human creativity and decision making, fraud, and the potential to make many jobs obsolete. Every industry is now assessing the impact of AI on its work, from educators and lawyers to government leaders and healthcare professionals.


Yet for all the attention consumer applications like ChatGPT are getting, there remains a more pervasive and less visible application of AI that is also seeing exponential growth: the multitude of algorithm-driven machine learning and deep learning (ML/DL) models that have become an integral part of the social media empires and are quickly expanding into nearly every facet of society. Greater use of these models in government and industry promises large gains in productivity and wealth creation for societies in the coming years.


Yet because of their complexity, very little is understood about how these models work and arrive at their predictions; they are commonly referred to as "black box" models. Often the results of these ML models can be explained only by their technical creators, while the users and subject matter experts (SMEs) who often initiate the use cases are left wondering how the results were determined and how they might be improved. Deep learning models are data hungry: performing well requires orders of magnitude more data, which means learning millions of parameters. Higher performance means higher complexity, and that complexity is traded off against transparency.
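To make that trade-off concrete, the following sketch contrasts a transparent linear model with a small neural network. It is illustrative only: the scikit-learn dataset and model sizes are assumptions chosen for brevity, not a description of any particular production system.

```python
# Illustrative sketch: a transparent model vs. an opaque one.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)  # scale features so both models train cleanly

# Transparent: one coefficient per feature, each readable as the
# direction and strength of that feature's influence on the prediction.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print("linear model parameters:", linear.coef_.size + linear.intercept_.size)

# Opaque: even a modest two-layer network learns ~74,000 weights,
# none of which maps to a human-readable rule on its own.
mlp = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=500,
                    random_state=0).fit(X, y)
n_params = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print("neural network parameters:", n_params)
```

The linear model's 31 parameters can be audited directly; the network's tens of thousands cannot, and that gap is exactly what XAI tools aim to close.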


Many users of these AI models are becoming acutely aware of the importance of transparency, fairness, and accuracy as they undertake the difficult task of understanding a model's results. As AI usage and investment increase, transparency will be required to evaluate the risks, and institutions, businesses, and government leaders will need to ensure that AI is used in responsible ways. Legislation to protect society, by ensuring that AI models remain fair and transparent and by preventing misuse, is already being considered in the EU and is likely to pass by year end. In the USA, the Department of Commerce's National Institute of Standards and Technology (NIST) has issued proposed guidelines, and the White House has published an AI Bill of Rights.


Explainable AI (XAI) is an emerging technology that offers insight into a model's predictions. Organizations looking to build use cases with ML/DL models should consider an XAI platform to incorporate transparency and explain their results. A well-developed XAI platform can detect issues of unfairness and bias and recommend steps to improve the model's balance, performance, and accuracy. An XAI platform built with natural language and data storytelling engages users and SMEs, allowing them to evaluate model reasoning and risk and then work to improve the model.
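As a concrete illustration of what such an explanation can look like, here is a minimal sketch using the open-source shap library with scikit-learn. The dataset, model, and library choice are assumptions made for the example; they do not describe any specific XAI platform's internals.

```python
# Illustrative sketch: attributing one prediction to its input features
# with SHAP, a widely used explainability technique.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer decomposes a single prediction into per-feature
# contributions, turning the opaque ensemble into a list of reasons.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[:1])[0]  # one row -> one vector

# Rank the features that pushed this prediction up or down the most.
ranked = sorted(zip(X.columns, contributions), key=lambda p: abs(p[1]), reverse=True)
for feature, value in ranked[:5]:
    print(f"{feature}: {value:+.2f}")
```

Output like this gives a user or SME a ranked list of the factors behind a single prediction, a starting point for the fairness and bias reviews described above.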


Through consistent integration of XAI technology, enterprise leadership and government authorities can gain access to reports that assure safe, transparent, and fair model usage. Overall, XAI has arrived at just the right time: it enables us to reap the many benefits of ML models while working to ensure their safety, transparency, and fairness.


Join us in our mission to bring the benefits of XAI to everyone.

Get in touch with us today to learn more about how we're making a difference.