
challenging tasks, sometimes superseding human performance,” Papernot said. “Hence, machine learning is becoming pervasive in many applications and is increasingly a candidate for innovative cybersecurity solutions.” However, he added that as long as vulnerabilities are not fully understood, the predictions made by machine learning models will remain difficult to trust.
Dumitras said a large number of attacks against machine learning have been discovered in the past decade. “While the problem that the attacker must solve is theoretically hard, it is becoming clear that it is possible to find practical attacks against most practical systems,” he added.
For example, hackers already know how to evade machine learning-based detectors, how to poison the training phase so that the model produces the outputs the attackers want, how to steal a proprietary machine learning model by querying it repeatedly and how to invert a model to learn private information about the users on whom it is based.
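The model-stealing attack, for instance, requires little more than patience and query access. The short Python sketch below is illustrative only (the data, the models and the query budget are invented for the example), but it shows the mechanics: an attacker repeatedly queries a victim classifier, records its answers and trains a look-alike substitute from them.

# Illustrative model-stealing sketch; all data and models are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# A stand-in for a proprietary "benign vs. malicious" classifier the attacker cannot see.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X, y)

# The attacker never sees the training data, only the victim's answers to its queries.
queries = np.random.RandomState(1).uniform(-3, 3, size=(5000, 10))
answers = victim.predict(queries)

# A substitute trained on those answers closely mimics the victim's behavior.
substitute = DecisionTreeClassifier(max_depth=8, random_state=1).fit(queries, answers)
agreement = (substitute.predict(X) == victim.predict(X)).mean()
print(f"substitute agrees with the victim on {agreement:.0%} of inputs")

In practice an attacker would choose queries more carefully, but even random probing can recover much of a simple model's decision boundary.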
Unfortunately, Dumitras said, “there are only a few known defenses, which generally work only for specific attacks and lose their effectiveness when the adversary changes strategies.”
For example, fake news, which can erode trust in the government, is amplified by users clicking on, commenting on or “liking” fraudulent stories on social media sites such as Facebook, Twitter and Google Plus. That behavior constitutes “a form of poisoning, where the recommendation algorithms operate on unreliable inputs, and they are likely to promote more fake news,” Dumitras said.
Indeed, the embrace of AML has created “very asymmetric warfare for the good guys.... The bad guys have [so far] had the benefits on their side,” said Evan Wright, principal data scientist
Machine learning in action: Cyberattack defense
Cyber defense systems must classify artifacts or activities — such as executable programs, network traffic or email messages — as benign or malicious.
Machine learning algorithms start from a few known benign and known malicious examples and use them to learn models of malicious activity without requiring a predetermined description of those activities.
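As a rough illustration of that idea, the Python sketch below trains a classifier from a handful of labeled examples and then scores a new artifact; the feature names and numbers are invented for the example, not drawn from any real detector.

# Minimal sketch: learn "benign vs. malicious" from a few labeled examples.
# Hypothetical per-artifact features: [size_kb, num_imports, byte_entropy, uses_network]
import numpy as np
from sklearn.ensemble import RandomForestClassifier

known_benign = np.array([[120, 12, 5.1, 0],
                         [300, 40, 4.8, 1],
                         [80,   9, 5.0, 0]])
known_malicious = np.array([[45, 3, 7.9, 1],
                            [60, 2, 7.6, 1],
                            [52, 4, 7.8, 0]])

X = np.vstack([known_benign, known_malicious])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_artifact = np.array([[55, 3, 7.7, 1]])   # small file, few imports, high entropy
print(model.predict(new_artifact))           # -> [1], flagged as malicious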
Three ways adversaries can subvert machine learning techniques:
1 Attack the trained model by crafting examples that cause the machine learning algorithm to mislabel an instance or learn a skewed model (a short poisoning sketch follows this list).
2 Attack the implementation by finding exploitable bugs in the code.
3 Exploit the fact that users often have no knowledge of a machine learning model’s inner workings.
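The poisoning half of the first attack is easy to demonstrate on synthetic data. In the hedged sketch below (the dataset and the 40 percent figure are invented for illustration), an attacker who controls part of the training set mislabels malicious examples as benign, and the sketch then counts how many held-out malicious items each model catches.

# Illustrative training-set poisoning on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker relabels 40 percent of the malicious training examples as benign.
rng = np.random.RandomState(1)
malicious_idx = np.where(y_train == 1)[0]
flipped = rng.choice(malicious_idx, size=len(malicious_idx) * 2 // 5, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Compare how many truly malicious test items each model detects.
malicious_test = X_test[y_test == 1]
print("caught by clean model:   ", clean.predict(malicious_test).sum(), "of", len(malicious_test))
print("caught by poisoned model:", poisoned.predict(malicious_test).sum(), "of", len(malicious_test))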