ing means efforts like Project Maven are “not even close to pulling the trigger,” a reference to the autonomous weapons that some AI critics fear. “We need to be testing; we need to be prototyping,” he added.
As DOD rolls out algorithms for testing, the CSIS report notes that the rise of algorithmic warfare has implications for future weapons. Among them is the integration of machine intelligence into current C4ISR systems.
“Perhaps the most transformative applications of military machine intelligence are in command and control,” the report states. “MI-enabled C2 could develop entirely novel strategies, anticipate enemy tactics, accelerate intelligence, surveillance, and reconnaissance and help coordinate activities of large numbers of dispersed units acting in tandem. As a greater share of decision-making on the battlefield happens at machine speed, human thinkers may be unable to keep up.”
Indeed, a number of machine learning and analytics startups advised by retired military officers are beginning to explore new approaches that extend beyond current predictive analytics. One approach, dubbed “abductive reasoning,” is touted as helping battlefield commanders understand enemy intentions by, for example, distinguishing a bluff from an actual attack.
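The article does not say how these startups build such tools, but the core idea of abductive reasoning, scoring competing hypotheses by how well each one explains the observed evidence and then selecting the best explanation, can be shown in a minimal sketch. Every hypothesis, signal and probability below is invented for illustration; none comes from the CSIS report or the companies mentioned.

# Minimal, hypothetical sketch of abductive reasoning: given observed
# signals, return the hypothesis that best explains them. All names
# and numbers here are illustrative, not from any real system.

from math import log

# P(signal | hypothesis): how well each hypothesis explains each signal.
LIKELIHOODS = {
    "bluff":  {"troop_movement": 0.7, "radio_silence": 0.2, "supply_buildup": 0.1},
    "attack": {"troop_movement": 0.8, "radio_silence": 0.7, "supply_buildup": 0.9},
}

PRIORS = {"bluff": 0.6, "attack": 0.4}  # assumed prior plausibility

def best_explanation(observed_signals):
    """Score each hypothesis by log-prior plus summed log-likelihoods."""
    scores = {}
    for hypothesis, likelihoods in LIKELIHOODS.items():
        score = log(PRIORS[hypothesis])  # logs avoid numeric underflow
        for signal in observed_signals:
            score += log(likelihoods[signal])
        scores[hypothesis] = score
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    hypothesis, scores = best_explanation(["troop_movement", "radio_silence"])
    print(hypothesis, scores)  # 'attack' explains radio silence far better

In this toy run, troop movement alone would favor the more common bluff, but adding radio silence flips the best explanation to an attack, which is the kind of intent-inference the startups are described as pursuing.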
Approaches like abductive reasoning and scenario-based wargaming are among the collaborative technologies embraced in the CSIS report. Elliot said military applications should for now focus on narrow, often tedious analytical tasks rather than true autonomy.
USING AI ‘IN THE RIGHT WAY’
Along with developing standards for ethics and safety, Elliot said a broader federal role in promoting and managing machine intelligence should take the long view. That includes familiar recommendations such as funding high-risk, high-payoff research and promoting workforce skills and other market mechanisms — for instance, by lowering barriers to entry for technology startups. The effort could also expand recent open-source efforts designed to make government data more accessible, he added.
Funding for a government/industry research consortium could be funneled through the Defense Advanced Research Projects Agency, the Defense Innovation Unit Experimental and, for university-based research, the National Science Foundation, Elliot said.
Congress is also weighing in on the need for a national machine intelligence strategy. The House Oversight and Government Reform Committee’s IT Subcommittee is holding a series of hearings on how government agencies can adopt game-changing AI technologies. DARPA and other agency officials echoed calls for investment in research and development, broader access to government data and a stronger AI workforce built through computer science and other education.
A subcommittee hearing in April focused on establishing guidelines for promoting machine intelligence development while ensuring “we’re using it in the right way,” said Rep. Will Hurd (R-Texas), the subcommittee’s chairman.
As the pace of development quickens, a consensus is building that the U.S. must lead from the front. “Machine learning has the potential to be transformative,” Elliot said. “It’s important that [the U.S.] not fall behind.” •
README
WHAT:
“Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” a report by the AI Now Institute at New York University, an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.
WHY:
As public agencies increasingly turn to automated processes and algorithms to make decisions, they need frameworks for accountability that can address inevitable questions on topics such as software bias and the system’s impact on the community. The AI Now Institute’s assessment gives public agencies a practical way to evaluate automated decision systems and ensure public accountability.
PROPOSAL:
Just as an environmental impact statement can increase agencies’ sensitivity to environmental values and effectively inform the public of coming changes, an algorithmic impact assessment aims to do the same for algorithms before governments put them to use. The process starts with a pre-acquisition review in which an agency, other public officials and the public at large are given a chance to review the proposed technology before the agency enters into any formal agreements.
Part of that process would include defining what the agency considers an “automated decision system,” disclosing details about the technology and its use, evaluating the potential for bias and inaccuracy, and planning for third-party researchers to study the system after it becomes operational.
Public comment should be solicited before any AI-enabled systems begin operation, the report states. In addition, a due-process period would allow outside groups or individuals to challenge an agency on its compliance with an impact assessment. Once an automated decision system is deployed, the affected communities should be notified.
Algorithmic impact assessments would encourage agencies “to better manage their own technical systems and become leaders in the responsible integration of increasingly complex computational systems in governance.” They would also provide an opportunity for vendors to foster public trust in their systems.
Once implemented, impact assessments should be renewed on a regular basis, according to the report.
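The report describes the assessment as a sequence of governance stages rather than software, but the workflow it lays out can be sketched as a simple ordered checklist. The stage names below paraphrase the report; the data structure and method names are hypothetical, for illustration only.

# Hypothetical sketch of the algorithmic impact assessment workflow
# described in the AI Now report, modeled as an ordered checklist.
# Stage names paraphrase the report; the code itself is illustrative.

from dataclasses import dataclass, field

AIA_STAGES = [
    "define_automated_decision_system",   # agree on what counts as an ADS
    "pre_acquisition_public_review",      # agency, officials and public review
    "disclose_technology_and_use",        # publish details before agreements
    "evaluate_bias_and_inaccuracy",       # assess potential harms
    "plan_third_party_research_access",   # enable outside study post-deployment
    "solicit_public_comment",             # before the system begins operation
    "open_due_process_window",            # allow challenges on compliance
    "notify_affected_communities",        # once the system is deployed
]

@dataclass
class ImpactAssessment:
    system_name: str
    completed: set = field(default_factory=set)

    def complete(self, stage: str) -> None:
        if stage not in AIA_STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.completed.add(stage)

    def ready_to_deploy(self) -> bool:
        # Every stage except post-deployment notification must be done
        # before the system goes live.
        required = set(AIA_STAGES) - {"notify_affected_communities"}
        return required <= self.completed

The report’s periodic-renewal requirement could be modeled the same way, by clearing the completed set on a fixed schedule so the agency must re-run the assessment.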
Full report: is.gd/GCN_AINow