[Briefing]
NIST updates Risk Management Framework
BY SARA FRIEDMAN
The National Institute of Standards and Technology is updating its Risk Management Framework to help public- and private-sector organizations better protect critical infrastructure and individuals’ privacy.
The new version addresses how organizations can assess and manage risks to their data and systems by focusing on protecting individuals’ personally identifiable information. Information security and privacy programs share responsibility for managing risks from unauthorized system activities or behaviors, making their goals complementary and coordination essential, the draft states.
The update also ties the RMF more closely to the Cybersecurity Framework.
“Until now, federal agencies had been using the RMF and CSF separately,” wrote NIST Fellow Ron Ross, one of the publication’s authors, in a blog post. “The update provides cross-references so that organizations using the RMF can see where and how the CSF aligns with the current steps in the RMF.”
Although the frameworks are optional for the private sector, federal agency compliance with the RMF became mandatory under the Federal Information Security Modernization Act of 2014. Agencies were also directed to comply with the CSF under a May 2017 executive order.
The updated RMF also integrates security and privacy into systems development, connects senior leaders to operations to better prepare for RMF execution, and supports security and privacy safeguards from NIST’s Special Publication 800-53 Revision 5. •

Holding algorithms accountable for bias and transparency
BY MATT LEONARD

Algorithms are increasingly being used to make decisions in the public and private sectors even though they have been shown to deliver biased outcomes in some cases. Several methods for governing algorithms have been proposed, but a new report by the Information Technology and Innovation Foundation’s Center for Data Innovation argues that previous proposals fall short.
Instead, the researchers outline a method of “algorithmic accountability” meant to protect against undesirable outcomes.
According to the report, previous efforts to combat bias fall into four categories: the algorithmic transparency or explainability mandate, the creation of a regulatory body to oversee algorithms, general regulation and simply leaving algorithms alone.
Each proposal has its faults, the report states. If all artificial intelligence must be explainable, for example, then the technology is held to a higher standard than we apply to human decision-making. Meanwhile, some kinds of algorithmic implementations don’t need regulations. For example, dating apps could result in a bad date, but that doesn’t mean they should be regulated.
The researchers said algorithmic accountability has three goals: promoting desirable or beneficial outcomes, protecting against undesirable or harmful outcomes, and ensuring that laws that apply to human decisions can be effectively applied to algorithmic decisions.
The authors concluded that a governance framework should use a variety of controls to ensure that operators can verify that an algorithm works in accordance with the operator’s intentions and identify and rectify harmful outcomes. In general, an algorithm should prioritize accuracy over transparency.
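The report does not prescribe any particular tooling, but the kind of outcome verification it describes can be sketched concretely. The Python example below is a hypothetical illustration, not anything from the report: it assumes an operator has logged decisions as (group, favorable-outcome) pairs, and it borrows the four-fifths threshold from U.S. employment guidance as one rough way to flag outcome gaps worth investigating.

# Minimal sketch of an outcome audit. The data layout and the 0.8
# threshold are illustrative assumptions, not prescribed by the report.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, favorable) pairs, where favorable
    is True if the algorithm's outcome benefited the person."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        favorable[group] += int(ok)
    return {g: favorable[g] / total[g] for g in total}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.
    Values well below 1.0 flag outcomes worth a closer look."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
ratio = disparate_impact_ratio(sample)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 for this sample
if ratio < 0.8:  # four-fifths rule of thumb from U.S. employment law
    print("flag for review: outcomes differ sharply across groups")

An audit like this checks results rather than internals, which is consistent with the report’s preference for accuracy over mandated transparency: the operator need not explain the model to detect and correct a harmful pattern in its outputs.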
However, the report also quotes Caleb Watney, a technology policy fellow at the R Street Institute, who said that because sunshine laws have set a precedent for transparency in the justice system, it might be appropriate to “mandate all algorithms that influence judicial decision-making be open-source.”
The center’s researchers said it would be reasonable to mandate that government agencies conduct an impact assessment process for any algorithms they plan to use. •
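The report stops short of specifying what such an impact assessment should contain. As one possible shape, the sketch below is entirely hypothetical, with field names invented for illustration; it simply records the facts an agency would need to weigh an algorithm before deployment: what it decides, what data it draws on, whom it affects and what evidence supports its accuracy.

# Hypothetical skeleton for an agency impact assessment record.
# Every field name is an assumption made for illustration; the report
# does not define a format for these assessments. (Python 3.9+)
from dataclasses import dataclass, field

@dataclass
class AlgorithmImpactAssessment:
    system_name: str
    purpose: str                      # the decision the algorithm informs
    data_sources: list[str]           # provenance of training and input data
    affected_populations: list[str]   # who the decisions touch
    accuracy_evidence: str            # how accuracy was validated
    harms_identified: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

assessment = AlgorithmImpactAssessment(
    system_name="benefits-eligibility-screener",
    purpose="prioritize applications for manual review",
    data_sources=["historical case files", "applicant-supplied forms"],
    affected_populations=["benefits applicants"],
    accuracy_evidence="holdout evaluation with error rates reported by group",
)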