Page 15 - FCW, July 30, 2016

STEVE KELMAN is professor of public management at Harvard University’s Kennedy School of Government and former administrator of the Office of Federal Procurement Policy.
Tech in the service of performance measurement
DHS’ innovations with natural language processing for intelligence analysis could be useful across government
I recently met with Tammy Tippie, who has been the performance improvement officer at the Department of Homeland Security's Office of Intelligence and Analysis for the past two years. Tippie was recently in Boston to visit the local fusion center for "connect the dots" intelligence collaboration, and we had lunch together.
It turned out to be the most interesting lunch I've had in weeks because it opened my eyes wide to some fascinating recent developments in technology-aided performance measurement for intelligence analysts that DHS has pioneered in the intelligence community.
As regular readers might be aware, I teach performance measurement in one of our flagship executive education programs at Harvard University's Kennedy School, a course called "Senior Executive Fellows" for GS-15s and colonels. Our students always include a number of people from the intelligence community, mostly analysts, and I have long been concerned that my classes were not adding value for them because it is hard to develop good performance measures for intelligence analysis.
Thanks to Tippie, I now know better.
What she told me was a revelation, frankly. The key to the DHS system is natural language processing software. Every piece of reporting and analysis that DHS intelligence analysts prepare is archived in a machine-readable database. Then the natural language processing software "reads" each document and gives it a score based on how well it answers the sorts of questions such documents are supposed to answer.
The software must be populated with templates, developed by humans, that show what a good answer is. The software cranks out an answer for each document based on those templates.
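DHS's actual software is not public, but the template-scoring idea the column describes can be sketched in a few lines. The following is a minimal illustration only, assuming a simple bag-of-words cosine similarity between a document and each human-written model answer; every name and phrase in it is invented for the example.

```python
# Illustrative sketch only: score a document against human-written
# templates of what a good product should answer, using bag-of-words
# cosine similarity. All names and template text here are invented.
import math
import re
from collections import Counter

def tokens(text):
    """Lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def score_document(doc: str, template_answers: list) -> float:
    """Average similarity of the document to each templated model
    answer; higher means the document covers more of what the
    template says a good product should address."""
    doc_vec = Counter(tokens(doc))
    sims = [cosine(doc_vec, Counter(tokens(t))) for t in template_answers]
    return sum(sims) / len(sims)

template = [
    "state the implications if the underlying data proves wrong",
    "identify sources for each judgment so readers can assess credibility",
]
report = ("Judgments rest on two named sources; if the underlying data "
          "proves wrong, the implications for the assessment are stated.")
print(round(score_document(report, template), 2))
```

A production system would use far richer language models than word overlap, but the shape is the same: human templates in, a per-document score out.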
Other technology-related ele- ments of the performance mea- surement system include the ability to track how often other analysts click through to a document, how long a reader spends with the document on average and how often the document is forwarded to other analysts.
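Those engagement measures are the kind of thing a small amount of instrumentation captures. As a hypothetical sketch only, with invented field names, per-document tracking might look like:

```python
# Hypothetical sketch of the engagement tracking described in the
# column: per-document counts of click-throughs and forwards, plus
# the average time readers spend with the document. Names invented.
from dataclasses import dataclass, field

@dataclass
class DocumentStats:
    clicks: int = 0
    forwards: int = 0
    read_seconds: list = field(default_factory=list)

    def record_read(self, seconds: float) -> None:
        """A reader clicked through and spent `seconds` with the doc."""
        self.clicks += 1
        self.read_seconds.append(seconds)

    def record_forward(self) -> None:
        """The document was forwarded to another analyst."""
        self.forwards += 1

    @property
    def avg_read_time(self) -> float:
        """Average seconds a reader spends with the document."""
        if not self.read_seconds:
            return 0.0
        return sum(self.read_seconds) / len(self.read_seconds)

stats = DocumentStats()
stats.record_read(120.0)
stats.record_read(60.0)
stats.record_forward()
print(stats.clicks, stats.forwards, stats.avg_read_time)  # 2 1 90.0
```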
The organization also has one low-tech customer feedback team that analyzes each document for how well it comports with tradecraft standards for the format and structure of an argument, and another that surveys customers on their satisfaction with every product.
Tippie’s unit organizes a monthly cycle of four meetings, one every week, to discuss the data. One reviews Office of Intelligence and Analysis performance metrics while another focuses on metrics related to mission support, such as help desk requests or training.
At some point, DHS officials should be able to review the trends to see if the software is showing improved document quality over time, but the system hasn’t been in use long enough to do that yet.
Natural language analysis has already revealed two areas in which analysts' reports are frequently problematic: incomplete assessments of the implications if the data on which the analysis was based proves to be wrong, and failure to sufficiently indicate the sources for judgments so readers can assess their credibility.
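Those two recurring shortfalls lend themselves to simple automated flagging. As a toy illustration under stated assumptions (the cue phrases below are invented stand-ins for real tradecraft criteria, not DHS's actual checks):

```python
# Toy illustration of flagging the two recurring shortfalls the column
# describes: no discussion of what happens if the data is wrong, and
# no sourcing language readers can use to judge credibility. The cue
# phrases are invented stand-ins for real tradecraft criteria.
IMPLICATION_CUES = ("if the data", "if this reporting", "proves wrong")
SOURCING_CUES = ("according to", "source", "reporting from", "based on")

def flag_shortfalls(doc: str) -> list:
    """Return the names of the tradecraft checks the document fails."""
    text = doc.lower()
    flags = []
    if not any(cue in text for cue in IMPLICATION_CUES):
        flags.append("missing implications-if-wrong assessment")
    if not any(cue in text for cue in SOURCING_CUES):
        flags.append("missing sourcing for judgments")
    return flags

print(flag_shortfalls("We assess the threat is low."))
# this one-line example fails both checks
```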
The office has assigned mentors to employees who consistently showed such shortcomings. The mentors have coached them on how to improve, and the best analysts have been assigned to review those employees' work after the mentoring.
I suspect those techniques could be used by almost any organization whose employees prepare analyses and reports. I wonder if any readers' agencies, inside or outside the intelligence community, are using such methods as well.
