
ExecTech
…scientist at threat intelligence company Anomali. “The good guys are forced to block everything.”
The need for research
However, the good guys are not totally out of luck. By better understanding the vulnerabilities of their machine learning algorithms, Papernot said government agencies can take a big first step toward mapping their attack surface. He recommended that agencies start with software like cleverhans, a Python library for benchmarking machine learning systems’ vulnerability to adversarial examples.
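To make the benchmarking idea concrete, the sketch below implements the fast gradient sign method, the canonical technique for generating adversarial examples and the kind of attack cleverhans automates. This is an illustrative PyTorch sketch, not the cleverhans API; the function name is ours, and it assumes inputs scaled to the [0, 1] range.

    import torch

    def fgsm_attack(model, loss_fn, x, y, eps):
        # Perturb x in the direction that most increases the loss,
        # bounded by eps under the L-infinity norm.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + eps * x_adv.grad.sign()
        # Clamp back to the valid input range (assumes [0, 1] pixels).
        return x_adv.clamp(0.0, 1.0).detach()

Even a small eps, imperceptible to a human, is often enough to flip a classifier’s prediction, which is exactly the failure mode agencies are being urged to measure.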
“Once a machine learning model is deployed and adversaries can interact with it — even in a limited fashion such as an API — it should be assumed that motivated adversaries are capable of reverse-engineering the model and potentially the data it was trained on,” Papernot said. Therefore, he added, agencies should closely monitor the privacy implications associated with training the model.
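The privacy risk Papernot describes is often studied as membership inference: a model typically incurs lower loss on records it was trained on, so an unusually low loss on a candidate record suggests the record was in the training set. A minimal loss-threshold sketch, assuming a PyTorch classifier; the function name and scoring scheme are illustrative, not anything the article specifies.

    import torch

    def membership_score(model, loss_fn, x, y):
        # Higher score (lower loss) suggests (x, y) was a training record.
        with torch.no_grad():
            return -loss_fn(model(x), y).item()

In practice, a decision threshold would be calibrated on records known to be outside the training set.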
Vorobeychik said public-sector IT professionals can get ahead of the problem by considering all potential vulnerabilities and conducting penetration testing for any machine learning algorithms they might institute.
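One way to turn that advice into a concrete pen-test metric is robust accuracy: how often the model still classifies correctly after each input is adversarially perturbed. A sketch reusing the fgsm_attack function above; the name robust_accuracy and the loop structure are ours.

    import torch

    def robust_accuracy(model, loss_fn, loader, eps):
        # Fraction of FGSM-perturbed inputs the model still gets right.
        correct = total = 0
        for x, y in loader:
            x_adv = fgsm_attack(model, loss_fn, x, y, eps)
            with torch.no_grad():
                correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
        return correct / total

A large gap between clean accuracy and robust accuracy is the kind of finding such a penetration test would surface.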
Dumitras said it is possible to prevent machine learning-based attacks in specific cases, but significant new research into developing effective defenses is needed.
For example, if the adversary “cannot query the machine learning system, does not have access to the training set, does not know the design of the system or the features it uses and does not have access to the implementation, it would be challenging to craft adversarial samples.”
However, those assumptions might be unrealistic, Dumitras added. Many government systems rely on open-source machine learning libraries, so adversaries can freely examine the code for potential weaknesses. “It may be tempting, in this case, to turn to ‘security through obscurity,’ by hiding as much information as possible about how the system operates,” he said. “But recent black-box attacks suggest that it is possible to craft effective adversarial samples with minimal information about the system.”
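The black-box attacks Dumitras cites typically work by training a local “substitute” model on labels obtained by querying the deployed system, then crafting adversarial samples against the substitute; those samples frequently transfer to the real target. A hedged sketch of the substitute-training step in PyTorch, where query_victim stands in for whatever query access the adversary has; all names here are illustrative.

    import torch
    import torch.nn as nn

    def train_substitute(query_victim, substitute, loader, epochs=5, lr=1e-3):
        # Fit a local substitute using only the victim's predicted labels.
        opt = torch.optim.Adam(substitute.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x in loader:
                with torch.no_grad():
                    y_victim = query_victim(x).argmax(dim=1)  # hard labels only
                opt.zero_grad()
                loss_fn(substitute(x), y_victim).backward()
                opt.step()
        return substitute

Because the attacker needs nothing but query access, hiding the model’s internals offers little protection, which is Dumitras’ point about security through obscurity.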
Likewise, sanitizing input data can reveal suspicious information before it is provided to the machine learning algorithm, but manual sanitization cannot be done at scale, Dumitras said. “Ultimately, more basic research is needed to develop effective defenses against adversarial machine learning attacks,” he added.
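One published example of automated, scalable sanitization is feature squeezing: run the model on both the raw input and a coarsened copy, and flag the input when the two predictions disagree sharply. A sketch assuming image inputs scaled to [0, 1]; the function name and threshold are illustrative, and this is one research proposal rather than a settled defense.

    import torch

    def squeeze_and_compare(model, x, bits=4, threshold=1.0):
        # Reduce color bit depth, then compare predictions; a large
        # disagreement flags the input as possibly adversarial.
        levels = 2 ** bits - 1
        x_sq = torch.round(x * levels) / levels
        with torch.no_grad():
            p_raw = torch.softmax(model(x), dim=1)
            p_sq = torch.softmax(model(x_sq), dim=1)
        return (p_raw - p_sq).abs().sum(dim=1) > threshold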