Artificial Intelligence
Transparency’s role in AI innovation
Open source tools and a light approach to regulation can speed advances in AI
David Egts
Chief Technologist, North America Public Sector, Red Hat
MOST PEOPLE ARE using artificial intelligence whether they know it or not, through everyday technologies such as Siri and Alexa. Others use it explicitly, for instance through software-as-a-service tools for work or personal use.
AI is moving fast, and regulation can’t keep up with the pace of technological change. Therefore, it’s important that we avoid top-down command and control that would hinder innovation.
U.S. CTO Michael Kratsios published an op-ed on Bloomberg.com in January that outlines the Trump administration’s light-touch approach to regulating AI technology. In particular, the administration wants to encourage people from academia, industry, nonprofits and the general public to comment on AI rulemaking at federal agencies.
Red Hat is built on the core values of openness, transparency, community and letting the best ideas win no matter where they come from. So it’s gratifying to see officials at the highest levels of government advocate having a wide range of groups figure out what those rules and policies should be and how we should create them to promote fairness and transparency.
Eliminating bias and bad decisions
Machine learning involves training a model, and it is critical for leaders to understand how those models reach their conclusions. For instance, a machine learning model’s code may not be biased, but the data used to train the model could be. Further, the model continues to learn as it is used. Although its decision-making improves, the decisions it made in the past may not be the same decisions it would make today or in the future.
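To make that point concrete, here is a minimal, hypothetical sketch in Python (the data, group labels and numbers are invented for illustration only): the same training code runs on two different decision histories, and only the skewed history teaches the model to favor one group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

group = rng.integers(0, 2, n)   # a protected attribute the model should ignore
merit = rng.normal(size=n)      # the signal that should drive approvals
X = np.column_stack([group, merit])

# Balanced history: past approvals depended only on merit.
y_fair = (merit > 0).astype(int)

# Skewed history: past reviewers also approved group 1 half the time
# regardless of merit, so group membership now predicts the label.
y_skewed = ((merit > 0) | ((group == 1) & (rng.random(n) < 0.5))).astype(int)

for name, y in [("balanced history", y_fair), ("skewed history", y_skewed)]:
    model = LogisticRegression().fit(X, y)   # identical, "unbiased" code
    approved = model.predict(X)
    print(name,
          "group 0 approval rate:", round(approved[group == 0].mean(), 2),
          "group 1 approval rate:", round(approved[group == 1].mean(), 2))
```

The code is byte-for-byte identical in both runs; only the historical data changes the outcome, which is why auditing the data matters as much as auditing the code.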
Being able to understand why decisions were made is critical to eliminating bias and bad decisions. Sometimes we need to know why an AI model reached the conclusion it did: for example, why it denied a veteran’s disability claim. In addition, machine learning can lead to patterns of bias and indirect racism, which further underscores the need for transparency. With some AI systems, however, there’s no way to tell how those decisions were made.

Open source technology can play a major role in providing the transparency needed to help identify bias and eliminate it. In addition to having source code that is open, the models and data must be accessible to third parties so they can independently replicate the results.
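As one illustration of what independent replication can look like in practice, the sketch below (hypothetical, with invented data and no connection to any specific agency system) pins a published random seed and prints cryptographic fingerprints of both the training data and the trained model, so a third party rerunning the same code against the same data can confirm they reproduce the same result.

```python
import hashlib
import numpy as np
from sklearn.linear_model import LogisticRegression

SEED = 1234  # published alongside the code and data

rng = np.random.default_rng(SEED)
X = rng.normal(size=(500, 3))            # stand-in for a released training set
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Fingerprint the inputs so reviewers can confirm they hold the same data.
data_digest = hashlib.sha256(X.tobytes() + y.tobytes()).hexdigest()

model = LogisticRegression().fit(X, y)

# Fingerprint the learned parameters; rerunning this script with the same
# library versions should reproduce both digests exactly.
model_digest = hashlib.sha256(model.coef_.tobytes()).hexdigest()

print("data  sha256:", data_digest)
print("model sha256:", model_digest)
```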
Closer collaboration for better outcomes

Predictive analytics, machine learning and AI in general aren’t intended to replace